
Lecture 28 - Memory 3 - Page Replacement Algorithms

Assignment Overview

Assignment 3

  • Due Date: In one week and two days.

  • Content: Consists of 4 questions, each with multiple parts.

  • Recommendation: Review the assignment ahead of the deadline to grasp its length and complexity.

Assignment 2 Update

  • Status: Distributions to markers in progress.

  • Current Phase: Marking underway.

Memory Management Overview

Multi-Level Page Table

  • Virtual Address: 39 bits length.

  • Page Size: 4 kilobytes, leading to a 12-bit offset.

  • Table Structure: Each level of the page table needs to fit within a single page.

    • Bits per Table: 9 bits per level.

  • Levels of Page Table: 3 total levels ➔ L0, L1, L2.

    • L1 and L0: Multiple tables may exist at each of these levels.

    • L2: Only one exists, tracked by a register.
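
As a sketch of how these levels are used, the 39-bit virtual address splits into three 9-bit table indices plus the 12-bit offset (the function name and return layout here are illustrative, not from the lecture):

```python
# Split a 39-bit virtual address: 9-bit L2 index | 9-bit L1 | 9-bit L0 | 12-bit offset.
OFFSET_BITS = 12
INDEX_BITS = 9
MASK = (1 << INDEX_BITS) - 1

def split_vaddr(vaddr):
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    l0 = (vaddr >> OFFSET_BITS) & MASK
    l1 = (vaddr >> (OFFSET_BITS + INDEX_BITS)) & MASK
    l2 = (vaddr >> (OFFSET_BITS + 2 * INDEX_BITS)) & MASK
    return l2, l1, l0, offset
```

For example, `split_vaddr((1 << 39) - 1)` yields index 511 at every level and offset 4095.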

Page Table Size

  • Minimum Requirement: 3 pages (1 from L2, 1 from L1, 1 from L0).

    • Total Size: 12 kilobytes.

  • Maximum Calculation:

    • L2: Maximum of 1.

    • L1: 2^9 entries, leading to a maximum of 2^9 pages.

    • L0: Each of the 2^9 L1 tables has 2^9 entries, so up to 2^9 × 2^9 = 2^18 L0 tables.

    • Total Pages: Aggregating all levels gives 1 + 2^9 + 2^18 pages.

  • Size in Memory: Total page count multiplied by 4 kilobytes (just over 1 GB at maximum).
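
The minimum and maximum sizes above can be checked with a little arithmetic (a sketch using the lecture's 4 KB pages and 9-bit indices):

```python
PAGE = 4 * 1024          # 4 KB page size
ENTRIES = 2 ** 9         # 512 entries per table level

min_pages = 3                              # one table at each of L2, L1, L0
max_pages = 1 + ENTRIES + ENTRIES ** 2     # 1 L2 + 2^9 L1 + 2^18 L0 tables

print(min_pages * PAGE)   # 12288 bytes = 12 KB
print(max_pages * PAGE)   # 1075843072 bytes, just over 1 GB
```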

Issues with Multi-Level Page Tables

Main Problems

  • Space Consumption: A fully populated page table covering the 512 GB address space consumes about 1 GB, which is excessive when most of the address space goes unused.

  • Memory Access Time: Slower due to multiple pointers being followed, akin to a linked list traversal.

    • Example Scenario: Array operations may lead to repeated accesses within the same page.

Translation Lookaside Buffer (TLB)

  • Purpose: A cache for frequently accessed page table entries to speed up memory access times.

  • Structure: Consists of two fields:

    • Virtual Page Number

    • Page Table Entry (physical page number).

TLB Access Logic

  • Process Flow:

    1. Check TLB first:

      • TLB Hit ➔ Proceed to memory fetch.

      • TLB Miss ➔ Check the page table next and then memory.

  • Access Count:

    • Hit: 2 accesses (1 TLB + 1 memory).

    • Miss: 3 accesses (1 TLB + 1 page table + 1 memory).
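
The flow above can be sketched with one dictionary standing in for the TLB and another for the page table (names and structure are illustrative, not from the lecture):

```python
def translate(vpn, tlb, page_table, stats):
    if vpn in tlb:                 # TLB hit: 1 TLB access + 1 memory access
        stats["hits"] += 1
        return tlb[vpn]
    stats["misses"] += 1           # TLB miss: walk the page table as well
    ppn = page_table[vpn]
    tlb[vpn] = ppn                 # refill the TLB for subsequent accesses
    return ppn
```

The refill step is what makes repeated accesses to the same page (the array scenario above) cheap after the first miss.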

Effective Access Time

  • Concept: Represents the average time taken to access memory considering both hits and misses in TLB.

  • Calculation Example:

    • TLB hit = 1 clock cycle,

    • Page Table access = 10 clock cycles,

    • Memory access = 100 clock cycles,

    • Probability of hit = 80%, miss = 20%; the effective access time weighs the cost of a hit (TLB + memory) against the cost of a miss (TLB + page table + memory).
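
Plugging those numbers into the usual weighted-average formula (assuming the TLB is checked on every access, and a miss adds one page-table access on top of the memory access):

```python
tlb, pt, mem = 1, 10, 100    # costs in clock cycles, from the example above
p_hit = 0.8

# EAT = P(hit) * hit cost + P(miss) * miss cost
eat = p_hit * (tlb + mem) + (1 - p_hit) * (tlb + pt + mem)
print(eat)   # 103.0 cycles
```

So the effective access time is 103 cycles, only slightly worse than the 101-cycle cost of a pure hit, because hits dominate.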

Page Faults

Definition

  • Page Fault: An exception that occurs when a requested page is not mapped in the physical memory, potentially residing on disk instead.

  • Phases: When the access fails, the OS intervenes to bring the page into memory from disk or to handle a permission violation.

Impact of Page Faults

  • Costly in Time: High latency (e.g., fetching data from a disk, approx. 5 ms).

  • Performance Concerns: Frequent page faults can severely degrade system performance.

Context Switching and Memory Management

Context Switches

  • Effect on TLB: A change in the active process may invalidate the TLB due to address space changes.

  • Handling Strategies:

    • Flush the TLB ➔ Clears all cached translations on every context switch (simple but costly).

    • Use Process IDs ➔ Tags each TLB entry with the owning process's ID so entries from different processes can coexist without flushing.
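
A minimal sketch of the process-ID approach: key each entry by (process ID, virtual page number) so a context switch needs no flush, because entries from other processes simply never match (class and method names are assumptions for illustration):

```python
class TaggedTLB:
    def __init__(self):
        self.entries = {}

    def insert(self, pid, vpn, ppn):
        self.entries[(pid, vpn)] = ppn

    def lookup(self, pid, vpn):
        # Returns the physical page number, or None on a TLB miss.
        # An entry cached by another process can never match this key.
        return self.entries.get((pid, vpn))
```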

Process Address Space Initialization

  • Kernel's Role: Sets up the initial memory layout for processes, including code, data, heap, and stack.

  • Error Handling for Null Pointers: The page at address zero is left unmapped by the kernel, so dereferencing a null pointer triggers a fault rather than silently succeeding.

  • Guard Pages: Unmapped pages placed beyond the stack so that a stack overflow triggers a fault instead of corrupting adjacent memory.

Memory Management Units (MMU)

  • Purpose: Translates virtual memory addresses to physical addresses, assisting in efficient memory management.

  • Implementations: Can be single or multi-level addressing structures.

Page Replacement Algorithms

Overview

  • Memory Hierarchy: Balances speed and capacity across different memory levels (cache, RAM, disk).

  • Demand Paging: Brings pages into RAM only when they are first referenced; swap space on disk holds pages that do not fit in memory.

Working Set Concept

  • Definition: The set of pages a process requires over a bounded duration; performance suffers if the working set exceeds physical memory, leading to thrashing.
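
One way to make this concrete: treat the working set at time t as the distinct pages referenced within a sliding window of recent references (a sketch; the window size stands in for the bounded duration):

```python
def working_set(refs, t, window):
    # Distinct pages touched by the last `window` references up to time t.
    return set(refs[max(0, t - window + 1): t + 1])

refs = [1, 2, 1, 3, 2, 2, 4]
print(working_set(refs, 6, 4))   # pages touched by the last 4 references
```

If this set is larger than the frames available to the process, every reference risks a fault and the process thrashes.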

Types of Page Replacement Algorithms

  • Optimal: Replaces the page whose next use lies furthest in the future (theoretical only, since it requires knowing future references).

  • FIFO (First In, First Out): Replaces the oldest page regardless of future use.

  • LRU (Least Recently Used): Replaces the least recently accessed page.

FIFO Example Implementation
  • Demonstrates FIFO's inefficiency: it can evict a page just before its next use, increasing the number of page faults.
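
A minimal FIFO simulator along these lines (the reference string below is an illustrative example, not one from the lecture):

```python
from collections import deque

def fifo_faults(refs, frames):
    mem, order, faults = set(), deque(), 0
    for page in refs:
        if page in mem:
            continue                     # already resident: no fault
        faults += 1
        if len(mem) == frames:
            mem.discard(order.popleft()) # evict the oldest resident page
        mem.add(page)
        order.append(page)
    return faults
```

With 3 frames, the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 incurs 9 faults: pages 1 and 2 are evicted right before being reused.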

LRU Implementation Challenges
  • Hardware Complexity: Maintaining a last-use timestamp or counter for every page on every reference is expensive to do in hardware.

  • Software Implementation: Track usage order with data structures such as counters or linked lists, moving a page to the front on each access.
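
The linked-list idea can be sketched with Python's `OrderedDict`, which keeps pages in recency order (a sketch, not the lecture's implementation):

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0       # ordered oldest -> most recent
    for page in refs:
        if page in mem:
            mem.move_to_end(page)        # refresh recency on a hit
            continue
        faults += 1
        if len(mem) == frames:
            mem.popitem(last=False)      # evict the least recently used page
        mem[page] = None
    return faults
```

With 3 frames, the reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 incurs 10 faults under LRU.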