OS exam Virtual memory


51 Terms

1
New cards

Which of the following are benefits of virtual memory?

  • The ability to allocate more memory than is physically available

  • Decreased application load time

  • Improved I/O efficiency

  • Increased multiprogramming level

  • Smaller virtual address space

Answer:

  • The ability to allocate more memory than is physically available

  • Decreased application load time

  • Improved I/O efficiency

  • Increased multiprogramming level

2
New cards

There are circumstances in which the virtual page size will NOT equal the physical frame size.

False – pages and frames are always the same size.

3
New cards

Given a four-level page table, how many memory accesses are required in the worst case (including original access)?

5

4
New cards

A process must have all of its code and data in physical memory in order to execute.

False – only the needed pages must be in memory.

5
New cards

Give two reasons why virtual memory is useful.

  • Allows programs larger than physical memory to run.

  • Increases multiprogramming (more processes in memory).

  • Reduces I/O by loading only needed pages.

  • Gives each process a large logical address space.

6
New cards

What determines the maximum size of a process’s virtual address space?

The number of bits in the virtual address (e.g., 32-bit, 64-bit).

7
New cards

A page is:
A. A piece of physical memory
B. A fixed-size block in the virtual address space
C. A variable-size region in virtual memory

B.

8
New cards

A frame is:
A. A block in virtual address space
B. A fixed-size block in physical memory
C. A TLB entry

B

9
New cards

Virtual and physical memory are both divided into fixed-size blocks. In virtual memory they are called ____, in physical memory they are called ____.

Pages, Frames

10
New cards

What is the role of the MMU (Memory Management Unit)?

Translates virtual addresses to physical addresses; only physical addresses go out on the memory bus.
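
The translation the MMU performs can be sketched in a few lines (a toy model, not a real MMU; the page-table contents and page size are made-up example values):

```python
# Illustrative sketch: split a virtual address into page number and
# offset, then map the page to a frame through a toy page table.

PAGE_SIZE = 4096                  # 4 KiB pages (example)
page_table = {0: 7, 1: 3, 2: 9}   # virtual page -> physical frame (example)

def translate(vaddr):
    page = vaddr // PAGE_SIZE
    offset = vaddr % PAGE_SIZE
    frame = page_table[page]      # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 3 -> 3*4096 + 4 = 12292
```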

11
New cards

What is demand paging?

A virtual memory technique where pages are loaded into physical memory only when they are referenced.

12
New cards

In a demand paging system, all pages of a process are loaded into RAM before execution begins.

False.

13
New cards

What does the valid–invalid bit in the page table indicate?

  • v (valid) → page is in memory.

  • i (invalid) → page is not in memory (or illegal), referencing it causes a page fault.

14
New cards

List the main steps on a page fault when the reference is valid but the page is not in memory.

  1. Trap to OS (page fault).

  2. OS finds/allocates a free frame.

  3. Reads the needed page from disk into that frame.

  4. Updates page table and sets valid bit.

  5. Restarts the instruction that caused the fault.
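
The five steps above can be mirrored in a toy demand pager (all names and contents are illustrative; a real handler lives in the kernel):

```python
# Toy demand paging: on a fault, grab a free frame, "read" the page
# from the backing store, update the page table, retry the access.

PAGE_TABLE = {}                     # virtual page -> frame (present = valid)
FREE_FRAMES = [0, 1, 2]             # pretend physical memory has 3 frames
DISK = {5: b"data5", 8: b"data8"}   # backing store contents (example)
MEMORY = {}                         # frame -> contents

def access(page):
    if page not in PAGE_TABLE:        # 1. trap: valid bit not set
        frame = FREE_FRAMES.pop()     # 2. allocate a free frame
        MEMORY[frame] = DISK[page]    # 3. read the page from "disk"
        PAGE_TABLE[page] = frame      # 4. update page table / valid bit
    return MEMORY[PAGE_TABLE[page]]   # 5. restart the access

print(access(5))   # faults, loads from disk
print(access(5))   # now hits in memory
```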

15
New cards

Write the EAT formula for demand paging.

EAT = (1 – p) × mem_time + p × page_fault_time, where p is the page-fault rate.

16
New cards

If p = 0.001 (1 in 1000 accesses faults) and page_fault_time is huge, what happens to EAT relative to mem_time?

EAT becomes much larger than mem_time; even a small p causes a large slowdown.
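
Plugging illustrative numbers into the formula makes the slowdown concrete (200 ns memory access and 8 ms fault service are typical textbook values, not measured ones):

```python
# Worked EAT example: even p = 0.001 gives roughly a 40x slowdown.

mem_time = 200           # memory access time in ns (example)
fault_time = 8_000_000   # page-fault service time: 8 ms in ns (example)

def eat(p):
    return (1 - p) * mem_time + p * fault_time

print(eat(0.001))  # 199.8 + 8000 = 8199.8 ns vs. 200 ns without faults
```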

17
New cards

Why must the page fault rate be extremely low in practice?

Because each page fault is very expensive (disk I/O, context switches), so even a small fraction significantly increases EAT/slowdown.

18
New cards

What is copy-on-write (COW)?

A technique where parent and child initially share pages in memory; when either modifies a shared page, a private copy of that page is created.

19
New cards

Why is copy-on-write used for process creation?

It avoids copying pages that are never modified, making process creation more efficient (less copying and less memory usage).

20
New cards

Why do we need page replacement?

When there are no free frames and a new page must be loaded, some page must be evicted to free a frame.

  • If no free frame → must choose a victim page.

  • Want to minimize page faults.

  • Use dirty bit to reduce disk writes.

21
New cards

What is the dirty (modify) bit used for in page replacement?

To indicate whether a page has been changed in memory; only dirty pages must be written back to disk when evicted.

22
New cards

How does FIFO page replacement work?

Maintains a queue of pages in the order they were loaded; evicts the oldest page.

23
New cards

What is Belady’s anomaly?

A phenomenon where increasing the number of frames increases the number of page faults (can occur with FIFO).
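
The anomaly is easy to reproduce with a small FIFO simulation on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 (a sketch; the string is the standard textbook example):

```python
# FIFO page replacement: with this reference string, 4 frames cause
# MORE faults than 3 -- Belady's anomaly.
from collections import deque

def fifo_faults(refs, nframes):
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:        # no free frame: evict oldest
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults -- more frames, more faults
```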

24
New cards

Describe the Optimal page replacement algorithm.

Evicts the page that will not be used for the longest time in the future.

25
New cards

Why is the Optimal algorithm not implementable in practice?

It requires knowing future memory references, which is impossible.

26
New cards

What is the idea behind LRU (Least Recently Used)?

Uses recent past usage as an approximation of future; evicts the page that has not been used for the longest time in the past.

27
New cards

LRU suffers from Belady’s anomaly.

False – LRU does not have Belady’s anomaly (more frames ⇒ fewer or equal faults).
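
A small LRU simulation (a sketch using an `OrderedDict` for recency order) shows that on the same reference string that trips up FIFO, adding a frame never increases faults:

```python
# LRU: most-recently-used pages at the end, evict from the front.
from collections import OrderedDict

def lru_faults(refs, nframes):
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(lru_faults(refs, 3))  # 10 faults
print(lru_faults(refs, 4))  # 8 faults -- fewer faults with more frames
```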

28
New cards

Name two implementation approaches for true LRU.

  • Counter (logical clock) implementation.

  • Stack (linked list) implementation.

29
New cards

What is the reference bit used for in LRU approximations?

It is set to 1 when a page is referenced; algorithms try to evict pages with reference bit 0.

30
New cards

How does the Second Chance algorithm work (basic idea)?

It’s FIFO with reference bits:

  • Inspect front page.

  • If ref bit = 0 → replace it.

  • If ref bit = 1 → clear the bit, move page to back (give it a “second chance”).
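
The victim-selection loop above can be sketched directly (queue contents and reference bits below are made-up example values):

```python
# Second Chance victim selection: FIFO plus reference bits.
from collections import deque

def second_chance_victim(queue, ref_bit):
    """queue: deque of pages, oldest first; ref_bit: page -> 0/1."""
    while True:
        page = queue.popleft()
        if ref_bit[page] == 0:
            return page          # bit clear: this page is the victim
        ref_bit[page] = 0        # bit set: clear it, move to the back
        queue.append(page)

queue = deque([4, 7, 1])
ref_bit = {4: 1, 7: 0, 1: 1}
print(second_chance_victim(queue, ref_bit))  # page 4 is spared; 7 is evicted
```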

31
New cards

What is equal allocation of frames?

Each process gets the same number of frames.

32
New cards

What is proportional allocation?

Frames are allocated to each process in proportion to its size: aᵢ = (sᵢ / S) × m, where S is the total size of all processes and m the number of frames.
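
The standard textbook instance (two processes of 10 and 127 pages sharing 62 frames) works out as follows; the truncating division is a simplification:

```python
# Proportional allocation: a_i = (s_i / S) * m frames for process i.

def proportional(sizes, m):
    total = sum(sizes)
    return [s * m // total for s in sizes]   # truncating, as a sketch

print(proportional([10, 127], 62))  # [4, 57]: 10/137 and 127/137 of 62 frames
```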

33
New cards

What is the difference between global and local page replacement?

  • Global: A process can replace any frame in the system (even from other processes).

  • Local: A process can only replace frames allocated to itself.

34
New cards

Why can global replacement lead to thrashing for low-priority processes?

High-priority or active processes may constantly steal frames, leaving too few frames for others, increasing their page-fault rates.

35
New cards

What is thrashing?

A situation where a process has too few frames, causing a very high page-fault rate, so it spends most of its time swapping pages in/out instead of doing useful work.

36
New cards

What happens to CPU utilization during thrashing?

It becomes low, because CPU is often waiting for I/O (page swaps).

37
New cards

What is the “locality model”?

The idea that at any time, a process tends to access a subset of its pages (a locality); it occasionally moves from one locality to another.

38
New cards

Define WSSᵢ (working set of process i).

The set (number) of pages that process i has referenced in its most recent Δ memory references.

39
New cards

What condition using D = Σ WSSᵢ indicates thrashing risk?

If D > m (total demand frames > available frames), the system is likely to thrash.
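
The working-set and D > m checks can be sketched on a toy reference string (all values below are illustrative):

```python
# Sliding-window working set with window size delta.

def wss(refs, delta):
    """Working-set size after the last reference: distinct pages in window."""
    return len(set(refs[-delta:]))

refs = [1, 2, 1, 5, 7, 7, 7, 7, 5, 1]
print(wss(refs, 5))              # last 5 refs touch pages {7, 5, 1} -> 3

m = 4                            # total available frames (example)
D = wss(refs, 5) + 2             # pretend a second process has WSS = 2
print("thrashing risk:", D > m)  # demand 5 > supply 4 -> risk of thrashing
```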

40
New cards

What does the Page-Fault Frequency (PFF) scheme try to control directly?

The page-fault rate of each process.

41
New cards

In PFF, what happens if a process’s page-fault rate is too high?

The process is given more frames (if available) or may be suspended if no frames can be allocated.

42
New cards

In PFF, what happens if the page-fault rate is too low?

The process may lose frames (some frames reallocated to other processes).

43
New cards

Give one tradeoff involved in choosing page size.

  • Smaller pages → less internal fragmentation, but larger page tables.

  • Larger pages → smaller page tables and greater TLB reach, but more internal fragmentation.

44
New cards

What is pre-paging?

Loading some or all of a process’s pages before they are referenced to reduce page faults at startup.

45
New cards

Define TLB reach.

TLB reach = (number of TLB entries) × (page size): the amount of memory that can be accessed through the TLB without a miss.
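
With typical example numbers (64 entries, 4 KiB pages; neither value comes from the cards):

```python
# TLB reach = entries x page size.
tlb_entries = 64
page_size = 4 * 1024             # bytes
print(tlb_entries * page_size)   # 262144 bytes = 256 KiB of reach
```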

46
New cards

How can program structure affect page faults (e.g., nested loops over a 2D array)?

Access patterns that follow memory layout (row-major) have better locality and fewer page faults; poor ordering (e.g., column-first) can cause many more faults.
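
The classic textbook instance is a 128×128 array where each row fills exactly one page and the process holds a single frame; counting page changes for both loop orders:

```python
# One frame, one page per row: row-order scans fault once per row,
# column-order scans fault on every single access.

N = 128  # array is N x N, one row per page (textbook example)

def page_faults(order):
    current, count = None, 0
    for a in range(N):
        for b in range(N):
            i = a if order == "row" else b   # row index selects the page
            if i != current:                 # single frame: any change faults
                count += 1
                current = i
    return count

print(page_faults("row"))     # 128 faults
print(page_faults("column"))  # 16384 faults
```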

47
New cards

What is a memory-mapped file?

A file whose disk blocks are mapped directly to pages in a process’s virtual memory so file I/O becomes regular memory accesses.

48
New cards

Compare mmap() vs read() in terms of when data is loaded.

  • read() loads the requested data immediately into a user buffer.

  • mmap() only sets up a mapping; pages are brought into memory on demand when accessed.
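
The contrast can be seen with Python's `mmap` module (a sketch; the file name is a throwaway temp file): `read()` copies bytes into a buffer immediately, while the mapping only brings pages in when touched.

```python
import mmap, os, tempfile

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello, demand paging")
    path = f.name

with open(path, "rb") as f:
    data = f.read(5)           # bytes are copied into `data` right now
print(data)                    # b'hello'

with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    mapped = bytes(m[:5])      # pages are faulted in on this access
    m.close()
print(mapped)                  # b'hello'

os.unlink(path)
```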

49
New cards

What is the buddy system for kernel memory allocation?

Allocates memory in blocks whose sizes are powers of 2; splits and coalesces “buddy” blocks to satisfy requests.

50
New cards

What is a downside of the buddy system?

Internal fragmentation due to rounding requests up to a power of 2 (e.g., 33K → 64K).
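
The rounding rule is just "smallest power-of-two block that fits", which reproduces the 33K → 64K example:

```python
# Buddy-system block sizing: round the request up to a power of two.

def buddy_block(size):
    block = 1
    while block < size:
        block *= 2
    return block

print(buddy_block(33 * 1024) // 1024)  # 64 (KiB): ~31 KiB wasted internally
```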

51
New cards

What is the slab allocator used for?

Efficiently allocating memory for kernel objects by keeping caches of preinitialized objects in slabs to reduce fragmentation and allocation time.