Which of the following are benefits of virtual memory?
A. The ability to allocate more memory than is physically available
B. Decreased application load time
C. Improved I/O efficiency
D. Increased multiprogramming level
E. Smaller virtual address space
A, B, C, and D. (Not E – virtual memory gives each process a larger, not smaller, virtual address space.)
There are circumstances in which the virtual page size will NOT equal the physical frame size.
False – pages and frames are always the same size.
Given a four-level page table, how many memory accesses are required in the worst case (including original access)?
5 – four page-table accesses plus the actual memory access.
A process must have all of its code and data in physical memory in order to execute.
False – only the needed pages must be in memory.
Give two reasons why virtual memory is useful.
Allows programs larger than physical memory to run.
Increases multiprogramming (more processes in memory).
Reduces I/O by loading only needed pages.
Gives each process a large logical address space.
What determines the maximum size of a process’s virtual address space?
The number of bits in the virtual address (e.g., 32-bit, 64-bit).
A page is:
A. A piece of physical memory
B. A fixed-size block in the virtual address space
C. A variable-size region in virtual memory
B.
A frame is:
A. A block in virtual address space
B. A fixed-size block in physical memory
C. A TLB entry
B.
Virtual and physical memory are both divided into fixed-size blocks. In virtual memory they are called ____, in physical memory they are called ____.
Pages, Frames
What is the role of the MMU (Memory Management Unit)?
Translates virtual addresses to physical addresses; only physical addresses go out on the memory bus.
What is demand paging?
A virtual memory technique where pages are loaded into physical memory only when they are referenced.
In a demand paging system, all pages of a process are loaded into RAM before execution begins.
False.
What does the valid–invalid bit in the page table indicate?
v (valid) → page is in memory.
i (invalid) → page is not in memory (or illegal), referencing it causes a page fault.
List the main steps on a page fault when the reference is valid but the page is not in memory.
Trap to OS (page fault).
OS finds/allocates a free frame.
Reads the needed page from disk into that frame.
Updates page table and sets valid bit.
Restarts the instruction that caused the fault.
Write the EAT formula for demand paging.
EAT = (1 – p) × mem_time + p × page_fault_time, where p is the probability that an access causes a page fault.
If p = 0.001 (1 in 1000 accesses faults) and page_fault_time is huge, what happens to EAT relative to mem_time?
EAT becomes much larger than mem_time; even a small p causes a large slowdown.
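A quick worked calculation makes this concrete (assumed, textbook-style numbers: 200 ns memory access, 8 ms page-fault service time):

```python
# Effective access time (EAT) under demand paging.
# Assumed example values: 200 ns memory access, 8 ms fault service.
mem_time = 200                 # ns, plain memory access
page_fault_time = 8_000_000    # ns (8 ms, dominated by disk I/O)
p = 0.001                      # one fault per 1000 accesses

eat = (1 - p) * mem_time + p * page_fault_time
print(f"EAT = {eat:.1f} ns, {eat / mem_time:.0f}x slower")  # 8199.8 ns, ~41x
```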
Why must the page fault rate be extremely low in practice?
Because each page fault is very expensive (disk I/O, context switches), so even a small fraction significantly increases EAT/slowdown.
What is copy-on-write (COW)?
A technique where parent and child initially share pages in memory; when either modifies a shared page, a private copy of that page is created.
Why is copy-on-write used for process creation?
It avoids copying pages that are never modified, making process creation more efficient (less copying and less memory usage).
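A small Unix sketch of the resulting semantics (os.fork; the kernel maps the parent's pages copy-on-write into the child):

```python
import os

data = bytearray(50 * 1024 * 1024)  # 50 MB; shared copy-on-write after fork

pid = os.fork()                      # child gets the same pages, not copies
if pid == 0:                         # child
    data[0] = 1                      # first write faults in a private copy
    os._exit(0)
else:                                # parent
    os.waitpid(pid, 0)
    print(data[0])                   # 0 – the parent's page was never touched
```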
Why do we need page replacement?
When there are no free frames and a new page must be loaded, some page must be evicted to free a frame.
If no free frame → must choose a victim page.
Want to minimize page faults.
Use dirty bit to reduce disk writes.
What is the dirty (modify) bit used for in page replacement?
To indicate whether a page has been changed in memory; only dirty pages must be written back to disk when evicted.
How does FIFO page replacement work?
Maintains a queue of pages in the order they were loaded; evicts the oldest page.
What is Belady’s anomaly?
A phenomenon where increasing the number of frames increases the number of page faults (can occur with FIFO).
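A small FIFO simulation (sketch) reproduces the anomaly on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:          # no free frame: evict oldest
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults – more frames, more faults
```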
Describe the Optimal page replacement algorithm.
Evicts the page that will not be used for the longest time in the future.
Why is the Optimal algorithm not implementable in practice?
It requires knowing future memory references, which is impossible.
What is the idea behind LRU (Least Recently Used)?
Uses recent past usage as an approximation of future; evicts the page that has not been used for the longest time in the past.
LRU suffers from Belady’s anomaly.
False – LRU does not have Belady’s anomaly (more frames ⇒ fewer or equal faults).
Name two implementation approaches for true LRU.
Counter (logical clock) implementation.
Stack (linked list) implementation.
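A minimal sketch of the stack approach, using Python's OrderedDict to keep pages in recency order (least recently used at the front):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under true LRU replacement."""
    frames = OrderedDict()                  # order = recency, LRU at front
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # referenced: now most recent
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults
```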
What is the reference bit used for in LRU approximations?
It is set to 1 when a page is referenced; algorithms try to evict pages with reference bit 0.
How does the Second Chance algorithm work (basic idea)?
It’s FIFO with reference bits:
Inspect front page.
If ref bit = 0 → replace it.
If ref bit = 1 → clear the bit, move page to back (give it a “second chance”).
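A sketch of this in code (assumptions: a hit sets the reference bit, and a newly loaded page starts with its bit set):

```python
from collections import deque

def second_chance_faults(refs, nframes):
    """Count page faults under the second-chance (clock) algorithm."""
    queue = deque()                      # entries: [page, ref_bit], oldest first
    faults = 0
    for page in refs:
        entry = next((e for e in queue if e[0] == page), None)
        if entry:
            entry[1] = 1                 # hit: set the reference bit
            continue
        faults += 1
        if len(queue) == nframes:
            while queue[0][1] == 1:      # bit set: clear it, send to back
                e = queue.popleft()
                e[1] = 0
                queue.append(e)
            queue.popleft()              # bit clear: this is the victim
        queue.append([page, 1])
    return faults
```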
What is equal allocation of frames?
Each process gets the same number of frames.
What is proportional allocation?
Frames are allocated in proportion to each process's size: aᵢ = (sᵢ / S) × m, where sᵢ is the size of process i, S is the total size of all processes, and m is the number of available frames.
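A sketch with assumed example values (two processes of sizes 10 and 127 pages sharing 62 frames):

```python
def proportional_allocation(sizes, m):
    """a_i = (s_i / S) * m: frames in proportion to process size."""
    S = sum(sizes)
    return [int(s / S * m) for s in sizes]  # leftover frames need distributing

print(proportional_allocation([10, 127], 62))  # [4, 57]
```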
What is the difference between global and local page replacement?
Global: A process can replace any frame in the system (even from other processes).
Local: A process can only replace frames allocated to itself.
Why can global replacement lead to thrashing for low-priority processes?
High-priority or active processes may constantly steal frames, leaving too few frames for others, increasing their page-fault rates.
What is thrashing?
A situation where a process has too few frames, causing a very high page-fault rate, so it spends most of its time swapping pages in/out instead of doing useful work.
What happens to CPU utilization during thrashing?
It becomes low, because the CPU is mostly waiting on paging I/O.
What is the “locality model”?
The idea that at any time, a process tends to access a subset of its pages (a locality); it occasionally moves from one locality to another.
Define WSSᵢ (working set of process i).
The number of distinct pages that process i has referenced in its most recent Δ memory references (Δ is the working-set window); those pages form its working set.
What condition using D = Σ WSSᵢ indicates thrashing risk?
If D > m (total demand frames > available frames), the system is likely to thrash.
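A sketch of both ideas (the reference string and per-process WSS values are made up for illustration):

```python
def wss(refs, delta):
    """Working-set size: distinct pages among the last delta references."""
    return len(set(refs[-delta:]))

print(wss([1, 2, 1, 3, 2, 1, 4, 4, 4, 3], 5))  # pages {1, 3, 4} -> 3

m = 8                        # total available frames
D = sum([3, 4, 5])           # sum of the WSS_i of all processes
print("thrashing risk" if D > m else "ok")  # 12 > 8 -> thrashing risk
```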
What does the Page-Fault Frequency (PFF) scheme try to control directly?
The page-fault rate of each process.
In PFF, what happens if a process’s page-fault rate is too high?
The process is given more frames (if available) or may be suspended if no frames can be allocated.
In PFF, what happens if the page-fault rate is too low?
The process may lose frames (some frames reallocated to other processes).
Give one tradeoff involved in choosing page size.
Smaller pages → less internal fragmentation, but larger page tables.
Larger pages → smaller page tables and greater TLB reach, but more internal fragmentation.
What is pre-paging?
Loading some or all of a process’s pages before they are referenced to reduce page faults at startup.
Define TLB reach.
TLB reach = (number of TLB entries) × (page size): the amount of memory that can be accessed without a TLB miss (e.g., 64 entries × 4 KB pages = 256 KB).
How can program structure affect page faults (e.g., nested loops over a 2D array)?
Access patterns that follow memory layout (row-major) have better locality and fewer page faults; poor ordering (e.g., column-first) can cause many more faults.
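A sketch of the classic textbook case (assumed: a 128×128 array stored row-major, each row exactly one page, and too few frames to hold the array):

```python
N = 128  # N x N array, row-major, one row per page

def faults(order):
    """Faults, assuming a single frame is available for the array's pages."""
    resident, count = None, 0
    for i in range(N):
        for j in range(N):
            page = i if order == "row" else j  # "column" visits data[j][i]
            if page != resident:
                count += 1
                resident = page
    return count

print(faults("row"))     # 128   – one fault per row
print(faults("column"))  # 16384 – a different page on every access
```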
What is a memory-mapped file?
A file whose disk blocks are mapped directly to pages in a process’s virtual memory so file I/O becomes regular memory accesses.
Compare mmap() vs read() in terms of when data is loaded.
read() loads the requested data immediately into a user buffer.
mmap() only sets up a mapping; pages are brought into memory on demand when accessed.
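A minimal Python sketch of the difference (assuming some file data.bin exists):

```python
import mmap

# read(): data is copied into a user buffer immediately.
with open("data.bin", "rb") as f:
    buf = f.read(4096)                 # explicit I/O happens here

# mmap(): only the mapping is set up; pages fault in on first touch.
with open("data.bin", "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    first = m[0]                       # this access may page-fault
    m.close()
```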
What is the buddy system for kernel memory allocation?
Allocates memory in blocks whose sizes are powers of 2; splits and coalesces “buddy” blocks to satisfy requests.
What is a downside of the buddy system?
Internal fragmentation due to rounding requests up to a power of 2 (e.g., 33K → 64K).
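A sketch of the rounding that causes it:

```python
def buddy_block_size(request):
    """Round a request up to the next power of two, as the buddy system does."""
    size = 1
    while size < request:
        size *= 2
    return size

print(buddy_block_size(33 * 1024))  # 33 KB request -> 65536 (a 64 KB block)
```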
What is the slab allocator used for?
Efficiently allocating memory for kernel objects by keeping caches of preinitialized objects in slabs to reduce fragmentation and allocation time.