Atomicity: A property of an operation that guarantees it is executed as a single, indivisible unit, preventing partial execution or interruption.
Barrier: A synchronization mechanism that forces a set of threads to wait until all threads reach a specific point before any can proceed.
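As an illustration, Python's `threading.Barrier` provides this mechanism directly. The sketch below (the three-thread count and thread names are arbitrary) shows that no thread passes the barrier until all have reached it:

```python
import threading

barrier = threading.Barrier(3)       # all 3 threads must arrive before any proceeds
events = []                          # list.append is thread-safe in CPython

def worker(name):
    events.append(f"{name} before")
    barrier.wait()                   # block here until all three threads arrive
    events.append(f"{name} after")

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every "before" event precedes every "after" event.
befores = [i for i, e in enumerate(events) if e.endswith("before")]
afters = [i for i, e in enumerate(events) if e.endswith("after")]
print(max(befores) < min(afters))    # True
```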
Blocked Process: A process that is waiting for an event, such as I/O completion, to occur before it can continue execution.
Context Switching: The process of saving the state of a running process and loading the state of another process to allow the CPU to switch between them.
Critical Section: A section of code that accesses shared resources and must be executed atomically to prevent race conditions.
Deadlock: A situation in which two or more threads are blocked indefinitely, waiting for each other to release the resources they need.
External Fragmentation: Unused memory space outside any allocated regions, making it difficult to allocate large contiguous blocks of memory.
fork(): A system call that creates a duplicate (child) of the calling process, returning the child's PID to the parent and 0 to the child.
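A minimal sketch in Python, whose `os.fork` wraps the POSIX call (so it runs only on Unix-like systems; the exit status 7 is an arbitrary example value):

```python
import os

# fork() returns twice: 0 in the child, the child's PID in the parent.
pid = os.fork()                      # POSIX only (Linux, macOS)
if pid == 0:
    # Child: runs the same program with a copy of the parent's memory.
    os._exit(7)                      # arbitrary example exit status
else:
    # Parent: reap the child and read its exit status.
    _, status = os.waitpid(pid, 0)
    print(os.WEXITSTATUS(status))    # 7
```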
Internal Fragmentation: Unused memory space within an allocated region, leading to inefficient memory utilization.
Kernel: The core of an operating system that manages system resources and provides essential services to other software.
Lock: A synchronization mechanism that protects a critical section by ensuring that only one thread can acquire the lock at a time.
Multiprogramming: A technique that allows multiple programs to be loaded into memory and share CPU time, improving CPU utilization.
Mutual Exclusion: The requirement that only one thread can access a critical section at a time, preventing race conditions.
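The lock, critical section, and mutual exclusion entries fit together in a short Python sketch using `threading.Lock`: the lock enforces mutual exclusion over the critical section that updates a shared counter (the thread and iteration counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:                   # critical section: one thread at a time
            counter += 1             # read-modify-write on shared state

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000 — without the lock, interleaved updates could be lost
```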
Page Fault: A hardware exception (trap) that occurs when a process tries to access a page that is not currently in physical memory, prompting the operating system to load the page or terminate the process on an invalid access.
Page Table: A data structure that maps virtual addresses to physical addresses in a paging-based virtual memory system.
Paging: A memory management technique that divides virtual and physical memory into fixed-size blocks called pages and frames, respectively.
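Assuming the common 4 KiB page size, the paging address split can be sketched in Python; the page-table mapping below is a made-up example, not a real OS structure:

```python
PAGE_SIZE = 4096                          # assume 4 KiB pages, a common choice
OFFSET_BITS = PAGE_SIZE.bit_length() - 1  # 12 offset bits for 4 KiB pages

def split(vaddr):
    """Split a virtual address into (virtual page number, page offset)."""
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

# Hypothetical page table: maps virtual page number -> physical frame number.
page_table = {0x12: 0x7}

vpn, offset = split(0x12345)
frame = page_table[vpn]                   # a miss here would be a page fault
paddr = (frame << OFFSET_BITS) | offset
print(hex(vpn), hex(offset), hex(paddr))  # 0x12 0x345 0x7345
```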
Process: A dynamic instance of a program in execution, with its own state and resources.
Process Control Block (PCB): A data structure used by the operating system to store information about a process.
Program: A static set of instructions written in a programming language.
Race Condition: A situation that occurs when the behavior of a program depends on the unpredictable timing of multiple threads accessing shared resources.
Ready Process: A process that is loaded into memory and is ready to run but is waiting for its turn on the CPU.
Running Process: A process that is currently executing on the CPU.
Semaphore: A synchronization variable that allows threads to control access to shared resources and coordinate their execution.
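A counting semaphore initialized to N admits at most N threads at a time. A Python sketch using `threading.Semaphore` (the limit of 2, the sleep, and the thread count are arbitrary):

```python
import threading
import time

sem = threading.Semaphore(2)         # at most 2 threads inside at once
active = 0                           # threads currently holding the semaphore
max_active = 0                       # high-water mark of concurrent holders
guard = threading.Lock()             # protects the two counters above

def worker():
    global active, max_active
    with sem:                        # acquire (decrement); release on exit
        with guard:
            active += 1
            max_active = max(max_active, active)
        time.sleep(0.01)             # simulate work while holding the semaphore
        with guard:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(max_active <= 2)  # True — the semaphore caps concurrency at 2
```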
System Call: A request made by a program to the operating system kernel for a specific service.
Thread: A lightweight unit of execution that shares the address space and resources of a process, allowing for concurrent execution within a process.
Timesharing: A technique that allows multiple users or processes to share a computer system by rapidly switching the CPU between them.
Translation Lookaside Buffer (TLB): A cache that stores recent virtual-to-physical address translations to speed up memory access.
Virtual Address Space (VAS): The set of memory addresses that a process can use, which may be larger than the available physical memory.
Virtual Memory: A memory management technique that gives each process the illusion of its own large, contiguous address space, allowing it to use more memory than is physically available by using disk storage as an extension of RAM.
Virtualization: The process of creating a virtual representation of a physical resource, providing abstraction and flexibility.
wait(): A system call used by a parent process to block until a child process terminates and to collect its exit status.
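A Python sketch combining fork() and wait() (POSIX-only; the exit codes are arbitrary): the parent reaps each child as it terminates and collects its exit status:

```python
import os

# Fork three children, each exiting with a distinct (arbitrary) status code.
for code in (1, 2, 3):
    if os.fork() == 0:               # POSIX only (Linux, macOS)
        os._exit(code)               # child terminates immediately

# Parent: os.wait() blocks until some child terminates, then returns its
# PID and encoded exit status; call it once per child to reap all three.
statuses = []
for _ in range(3):
    _, status = os.wait()
    statuses.append(os.WEXITSTATUS(status))

print(sorted(statuses))              # [1, 2, 3]
```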