Reader-Writer Problem
A classic synchronization problem where multiple threads access shared data:
Readers only read the data (no modification).
Writers modify the data.
Goal:
Allow multiple readers to access the data simultaneously, but ensure exclusive access for writers (no readers or other writers at the same time).
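A minimal sketch of this pattern using POSIX read-write locks; the shared variable and function names are illustrative. Readers share the lock, while a writer holds it exclusively.

```c
#include <pthread.h>

static pthread_rwlock_t lock = PTHREAD_RWLOCK_INITIALIZER;
static int shared_data;  /* illustrative shared resource */

int reader(void) {
    pthread_rwlock_rdlock(&lock);   /* many readers may hold this at once */
    int value = shared_data;        /* read only, no modification */
    pthread_rwlock_unlock(&lock);
    return value;
}

void writer(int value) {
    pthread_rwlock_wrlock(&lock);   /* exclusive: blocks readers and other writers */
    shared_data = value;
    pthread_rwlock_unlock(&lock);
}
```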
deadlock
occurs when a group of processes are each waiting for resources that are held by the others — creating a cycle of dependencies that prevents any of them from continuing
Processes stop making progress, resources are wasted, and the system may freeze or slow down
why is a deadlock undesirable
Mutual Exclusion, Hold and Wait, No Preemption, Circular Wait
Four Necessary Conditions for Deadlock
Deadlock Possibilities in Multithreaded Code
When two threads each try to lock something the other thread has already locked
Also happens with nested locks, or when lock ordering is inconsistent across threads.
Example: Thread A locks mutex1, then tries to lock mutex2, while Thread B holds mutex2 and is trying to lock mutex1 → deadlock.
Common with mutexes or semaphores.
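A minimal sketch of that example with POSIX mutexes (the thread functions are illustrative): if each thread acquires its first mutex before either gets its second, both block forever.

```c
#include <pthread.h>

static pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;

/* Thread A locks mutex1 first, then mutex2. */
void *thread_a(void *arg) {
    pthread_mutex_lock(&mutex1);
    pthread_mutex_lock(&mutex2);   /* blocks forever if B already holds mutex2 */
    /* ... critical section ... */
    pthread_mutex_unlock(&mutex2);
    pthread_mutex_unlock(&mutex1);
    return arg;
}

/* Thread B locks them in the opposite order: the classic deadlock pattern. */
void *thread_b(void *arg) {
    pthread_mutex_lock(&mutex2);
    pthread_mutex_lock(&mutex1);   /* blocks forever if A already holds mutex1 */
    /* ... critical section ... */
    pthread_mutex_unlock(&mutex1);
    pthread_mutex_unlock(&mutex2);
    return arg;
}
```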
Deadlock Detection
Allow deadlocks to happen, but have a way to find them and fix the problem afterward; flexible, better resource usage
deadlock avoidance
Let processes request resources in ways that could cause deadlock, but control things carefully so deadlock never actually happens; complex and costly to track state
deadlock prevention
Design the system so that deadlocks can’t happen at all by avoiding the conditions that cause them; can hurt performance and fairness
Make resources shareable if possible
deadlock prevention technique to address Mutual Exclusion
Require all resources to be requested at once
deadlock prevention technique to address Hold and wait
Allow OS to forcefully take resources
deadlock prevention technique to address no preemption
Force a specific order for requesting resources
deadlock prevention technique to address circular wait
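A minimal sketch of this technique applied to the two-mutex example above: when every thread acquires mutex1 before mutex2, no cycle of waiting threads can form, so circular wait is impossible.

```c
#include <pthread.h>

static pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;

/* Every thread follows the same global lock order: mutex1, then mutex2.
 * With one fixed order, a waiting cycle (circular wait) cannot arise. */
void *worker(void *arg) {
    pthread_mutex_lock(&mutex1);
    pthread_mutex_lock(&mutex2);
    /* ... critical section ... */
    pthread_mutex_unlock(&mutex2);
    pthread_mutex_unlock(&mutex1);
    return arg;
}
```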
Mutual exclusion
Some resources (like a printer) can only be used by one process at a time
hold and wait
A process is holding some resources and is waiting for more
no preemption
The OS can’t take resources away from a process. It has to wait until the process gives them up.
circular wait
There's a loop of processes, each waiting for a resource that the next one has
yes (often necessary, not avoidable)
Appropriateness of Prevention Technique mutual exclusion
no (inefficient, causes resource hogging or starvation)
Appropriateness of Prevention Technique hold and wait
no (risky, can’t safely take a printer mid-job)
Appropriateness of Prevention Technique no preemption
yes (Practical, widely used via resource ordering)
Appropriateness of Prevention Technique circular wait
are in the deadlock cycle, hold many resources, haven't done much work yet, or appear in multiple cycles
when choosing processes to terminate to eliminate deadlock, choose processes that
safe state
a system can allocate resources to all processes in some order without leading to deadlock; the OS can guarantee completion
unsafe state
The system is not in deadlock yet, but there's no guarantee that all processes can complete. Deadlock might occur depending on how processes behave. The OS can’t ensure safety anymore.
Deadlocked State
A set of processes are stuck waiting for resources that will never be released. The system has entered a state where no progress is possible for some or all processes.
Banker’s Algorithm
a deadlock avoidance method that checks whether granting a resource request will keep the system in a safe state before approving it.
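A minimal sketch of the safety check at the core of the Banker's Algorithm; the process and resource counts, and the matrix layout, are illustrative assumptions.

```c
#include <stdbool.h>
#include <string.h>

#define N 5  /* number of processes (hypothetical) */
#define M 3  /* number of resource types (hypothetical) */

/* Returns true if the system is in a safe state: some ordering exists in
 * which every process can obtain its remaining need and run to completion. */
bool is_safe(int available[M], int allocation[N][M], int need[N][M]) {
    int work[M];
    bool finished[N] = { false };
    memcpy(work, available, sizeof(work));

    for (int done = 0; done < N; ) {
        bool progressed = false;
        for (int p = 0; p < N; p++) {
            if (finished[p]) continue;
            /* Can process p's remaining need be met with what's available? */
            bool can_run = true;
            for (int r = 0; r < M; r++)
                if (need[p][r] > work[r]) { can_run = false; break; }
            if (can_run) {
                /* Pretend p runs to completion and releases its allocation. */
                for (int r = 0; r < M; r++) work[r] += allocation[p][r];
                finished[p] = true;
                progressed = true;
                done++;
            }
        }
        if (!progressed) return false;  /* no process can finish: unsafe */
    }
    return true;
}
```

To evaluate a request, the OS would tentatively subtract it from available and need, add it to allocation, and grant the request only if is_safe still returns true.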
Compile time binding
Memory addresses are chosen when the program is compiled
simple and fast to run
Compile time binding advantages
Must load into the exact memory location it was compiled for; inflexible.
Compile time binding disadvantages
Load-Time Binding
Addresses are chosen when the program is loaded into memory
Can run in different memory locations
Load-Time Binding advantages
Once it starts, the program can’t move
Load-Time Binding disadvantages
Execution-Time Binding
memory addresses are chosen as the program runs
Most flexible (programs can move in memory during execution)
Execution-Time Binding advantages
slower and requires extra hardware (e.g., an MMU)
Execution-Time Binding disadvantages
static linking
All libraries are added directly to the executable at build time, Program is self-contained—no dependencies at runtime, larger file size
dynamic linking
Library code is linked later, at load or runtime, Executable is smaller and can use up-to-date shared libraries, need OS support
Shared Libraries
One library file is shared across many programs, saves memory and storage—only one copy in memory and on disk, needs careful management from the OS to avoid conflicts
Dynamic Loading
Only load code (like a subroutine) when it’s first needed, saves memory and makes startup faster, can slow things down the first time a routine is used
Swapping
when the OS moves a whole process out of memory (to disk) to free up space for other processes
disk (Stores the swapped-out process's memory), CPU and MMU (coordinate the swap)
hardware involved with swapping
Memory is full and processes need space
Low-priority processes are chosen to be swapped out
The PCB stays in memory so the process can resume
Transfer time is the biggest cost (swapping large processes takes time)
when would swapping be performed
contiguous allocation
Each process gets one big chunk of memory (a ____ block). Managed using base and limit registers. Simple to implement, but leads to fragmentation problems
first-fit
Allocate the first hole big enough
fast
can leave many small unusable holes
best-fit
use the smallest hole that still works
reduces wasted space
slower search, leftover hole may be too small
worst-fit
use the biggest hole
may leave usable leftovers
can still waste space badly
next-fit
Allocate the first hole big enough, start the search from where you left off last time
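A minimal sketch of these hole-search strategies, assuming free memory is tracked as a simple array of holes (the Hole type is illustrative, not a real allocator).

```c
#include <stddef.h>

typedef struct { size_t start; size_t size; } Hole;

/* First-fit: return the index of the first hole big enough for `request`,
 * or -1 if none fits (external fragmentation may leave only small holes). */
int first_fit(const Hole *holes, int n_holes, size_t request) {
    for (int i = 0; i < n_holes; i++)
        if (holes[i].size >= request)
            return i;
    return -1;
}
```

Best-fit would instead scan every hole for the smallest one that still fits; next-fit would resume the scan from where the previous search stopped.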
50-percent rule
when using first-fit to allocate memory, for every N allocated blocks about another 0.5N blocks are lost to external fragmentation; since 0.5N / (N + 0.5N) = 1/3, around one-third of all memory can't actually be used
External fragmentation
occurs when free memory is scattered in small chunks between allocated blocks, making it impossible to fit larger processes even though there’s enough total free space; caused by dynamic memory allocation (first-fit, best-fit strategies)
Internal fragmentation
when a process is given slightly more memory than it needs, leaving unused space inside the allocated block that can’t be used by others
logical address
also called a virtual address; the address generated by the CPU when a program runs.
It's what the program thinks it's using
physical address
The actual location in main memory (RAM) where data is stored.
It’s controlled by the Operating System and hardware.
address translation
lets a process use its own virtual addresses without knowing where it's actually stored in physical memory, making memory management more flexible, secure, and efficient
Uses virtual instead of physical addresses
Hides actual memory locations
Supports moving processes in memory
Improves memory protection
Enables paging and segmentation
purpose of address translation
Dynamic memory allocation (like first-fit, best-fit strategies), in systems that require contiguous memory
External fragmentation caused by
Fixed-size partitioning, Paging
internal fragmentation caused by
Single-level paging
a memory management scheme where a logical address is split into a page number and offset, with the page number used to look up a frame number in the page table, and the frame number combined with the offset to find the physical address
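A minimal sketch of that lookup in C, assuming 4 KiB pages (12-bit offset), a flat page table, and no validity checks; the sizes are illustrative.

```c
#include <stdint.h>

#define OFFSET_BITS 12                        /* 4 KiB pages */
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

/* page_table[p] holds the frame number for page p (valid-bit check omitted). */
uint32_t translate(uint32_t logical, const uint32_t *page_table) {
    uint32_t page   = logical >> OFFSET_BITS;  /* high bits: page number   */
    uint32_t offset = logical & OFFSET_MASK;   /* low bits: offset in page */
    uint32_t frame  = page_table[page];        /* page-table lookup        */
    return (frame << OFFSET_BITS) | offset;    /* physical address         */
}
```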
multi-level paging
breaks a logical address into multiple page numbers and an offset, using the first page number to access a page directory and the next to access a page table, which helps save memory by only creating page tables when needed—especially in large address spaces
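A minimal sketch of the two-level lookup, assuming a 32-bit address split 10/10/12 as on classic x86; validity checks are omitted, and an unused directory entry would be NULL.

```c
#include <stdint.h>

/* page_directory[d] points to a second-level page table, or is NULL if that
 * region of the address space is unused (a real OS would fault here). */
uint32_t translate2(uint32_t logical, uint32_t **page_directory) {
    uint32_t dir_idx = logical >> 22;             /* top 10 bits  */
    uint32_t tbl_idx = (logical >> 12) & 0x3FFu;  /* next 10 bits */
    uint32_t offset  = logical & 0xFFFu;          /* low 12 bits  */
    uint32_t *page_table = page_directory[dir_idx];
    uint32_t frame = page_table[tbl_idx];
    return (frame << 12) | offset;
}
```

Because directory entries for untouched regions can stay NULL, the OS only allocates second-level tables for the parts of the address space a process actually uses.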
valid bit
Indicates whether the page is currently mapped to a physical frame in memory (valid) or not (invalid)
Read-only bit
Specifies if the page is protected against writing, allowing only read access (read-only)
Modified (dirty) bit
Marks if the page has been modified (written to) since it was loaded into memory
Reference (used) bit
Tracks whether the page has been accessed (read or written to) recently, helping the OS manage page replacement
page table creation
Step 1: OS allocates a page table when a process starts.
Step 2: Sets PTBR to point to the page table.
Step 3: Fills entries on demand as memory is accessed.
shared memory
The OS lets multiple processes point to the same memory to save space, often for read-only data like code
(page table built for) Copy-on-Write (COW)
After a fork, the parent and child share memory until one writes, triggering the OS to copy the page and give them separate versions.
Growing logical address space
The OS allows parts like the stack or heap to grow by adding pages only when they’re actually needed
TLB (Translation Lookaside Buffer)
a small, fast cache that stores recent page table entries to speed up address translation
What TLB does
reduces the time needed for each memory access by avoiding a full page table lookup when the translation is already cached
Deriving EMAT (effective memory access time)
If memory access time is m and the TLB hit ratio is α, then EMAT = αm + (1 - α)(2m) for single-level paging, since a TLB hit takes one access and a miss takes two; for two-level paging, a TLB miss takes three accesses, so EMAT = αm + (1 - α)(3m)
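For example, with hypothetical values m = 100 ns and α = 0.9: single-level EMAT = 0.9(100) + 0.1(200) = 90 + 20 = 110 ns, while two-level EMAT = 0.9(100) + 0.1(300) = 90 + 30 = 120 ns.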
consumes excessive memory (especially for sparse address spaces, as it needs an entry for every page in the virtual address space)
problem with a very large single-level page table
page-table length register (how it Alleviates Page Table Size issue)
stores the size of the page table, allowing the operating system to know where the valid entries end. This prevents the OS from having to allocate memory for the maximum possible number of pages, reducing unnecessary memory usage for small processes
multi-level paging (how it Alleviates Page Table Size issue)
breaks the page table into smaller parts (like a tree structure), so only the needed parts of the page table are kept in memory. This reduces memory usage by avoiding allocation for unused virtual address space.
Hashed Page Tables (how it Alleviates Page Table Size issue)
use a hash function to index into the page table, allowing the table to be sparse and much smaller. This is especially effective for large address spaces with few used pages, reducing the size and lookup time.
Inverted Page Tables (how it Alleviates Page Table Size issue)
Instead of keeping an entry for every possible virtual page, it keeps one entry for each physical memory frame. This means the table’s size depends on how much real memory you have, not how big the virtual memory is. So, it uses much less memory for the page table, even when the virtual address space is huge.
Demand Paging
a memory management scheme where pages of a program are loaded into memory only when they are required (i.e., on-demand). This reduces memory usage because only the pages actively needed by a program are loaded into physical memory, rather than the entire program
what demand paging does
When a program accesses a page that is not in memory (a page fault occurs), the operating system brings the page into memory from secondary storage (usually a disk).
Page Fault Occurs (The process tries to access a page that is not in memory)
Trap to the OS (The hardware triggers an interrupt (trap), and the OS takes control)
Check for Valid Access (The OS checks if the access is valid)
If invalid, the process is terminated
Find a Free Frame (The OS checks for free frames in memory. If none are available, it must decide which page to evict)
Swap Page from Disk (The OS loads the page from the backing store (disk) into memory)
Update Page Table (The OS updates the page table to reflect that the page is now in memory and marks it as valid)
Resume Process
Steps that Occur on Page Fault
Pure Demand Paging
a variant of demand paging where no pages are loaded at program startup. Pages are only loaded when needed, reducing startup time and allowing more processes to run with limited memory.
FIFO (first in, first out)
Replaces the oldest loaded page first, easy to implement but can evict useful pages, causing more page faults
Optimal page replacement
Replaces the page not needed for the longest future time, offering the lowest possible page fault rate but needs impossible future knowledge.
LRU (Least Recently Used)
Replaces the least recently used page, performing close to optimal and easy to approximate but requiring costly tracking of access history.
Second Chance
Enhances FIFO (first in first out) by giving recently accessed pages a second chance, smarter than FIFO and effective in practice, less accurate than LRU
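A minimal sketch of the Second Chance (clock) victim search, assuming each frame carries a reference bit set by hardware on access; the Frame type is illustrative.

```c
typedef struct {
    int page;     /* page currently in this frame */
    int ref_bit;  /* set by hardware on access, cleared by the algorithm */
} Frame;

/* Sweep a circular "clock hand" over the frames: a frame whose reference
 * bit is set gets a second chance (bit cleared, hand moves on); the first
 * frame found with a clear bit becomes the victim. */
int choose_victim(Frame *frames, int n_frames, int *hand) {
    for (;;) {
        Frame *f = &frames[*hand];
        if (f->ref_bit == 0) {
            int victim = *hand;
            *hand = (*hand + 1) % n_frames;  /* next search starts past victim */
            return victim;
        }
        f->ref_bit = 0;                      /* give it a second chance */
        *hand = (*hand + 1) % n_frames;
    }
}
```

Because each pass clears reference bits as it goes, the loop always finds a victim within two sweeps.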
Belady's anomaly
Sometimes, adding more memory frames can surprisingly cause more page faults with certain algorithms like FIFO.
stack algorithm
An algorithm where adding more frames never causes more page faults, because the pages held in f frames are always a subset of those held in f+1 frames
Stack Algorithms Avoid Belady’s Anomaly
____ avoids _____ by always keeping more pages when given more memory, without messing up the page replacement order
Local Page Replacement
Each process has its own set of frames, and pages are replaced only within that process's allocated memory.
Global Page Replacement
A single pool of frames is shared by all processes, and pages can be replaced across processes
Local Page Replacement (advantages/disadvantages)
Prevents processes from affecting each other’s memory, can waste memory if a process has more frames than needed
Global Page Replacement (advantages/disadvantages)
Uses memory more efficiently by sharing frames, one process can cause page faults in others
thrashing
happens when the system spends more time swapping pages than running programs, causing it to slow down badly
what causes thrashing
too many processes competing for too little memory, or processes being given too few frames
what remedies thrashing
giving processes more frames, using better replacement algorithms, reducing the number of processes, or managing working sets more carefully
working set model
A process’s working set is the set of pages it is actively using, and keeping the full set in memory helps it run efficiently as the set changes over time.
Localities
Programs show temporal locality (recent data used again soon) and spatial locality (nearby data accessed together).
Arrays
have good spatial locality because elements are stored next to each other in memory
Linked lists
generally have poor spatial locality because their nodes can be scattered across memory
hash tables
can have mixed locality, depending on the implementation — accessing random buckets can hurt spatial locality
sorting algorithms
usually have good locality because they tend to work on nearby data
Matrix operation, copying row by row
good locality (because memory is laid out in rows)
Matrix operation, copying column by column
bad for locality (jumps across non-adjacent memory)
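A minimal sketch contrasting the two traversals in C, where 2-D arrays are stored row by row; N is an illustrative size.

```c
#define N 1024
double a[N][N], b[N][N];

/* Good locality: the inner loop walks adjacent memory, so each cache line
 * (and each page) is used fully before moving on. */
void copy_by_rows(void) {
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            b[i][j] = a[i][j];
}

/* Poor locality: consecutive accesses are N * sizeof(double) bytes apart,
 * so nearly every access touches a different cache line (or page). */
void copy_by_columns(void) {
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            b[i][j] = a[i][j];
}
```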
matrix multiplication
can be optimized for locality by reordering loops or using block-based multiplication