Due Date: In a week and 2 days.
Content: Consists of 4 questions, each with multiple parts.
Recommendation: Review the assignment ahead of the deadline to grasp its length and complexity.
Status: Distribution to markers is in progress.
Current Phase: Marking underway.
Virtual Address: 39 bits length.
Page Size: 4 kilobytes, leading to a 12-bit offset.
Table Structure: Each level of the page table needs to fit within a single page.
Bits per Table: 9 index bits per level (39 - 12 = 27 VPN bits split evenly across the 3 levels).
Levels of Page Table: 3 total levels ➔ L0, L1, L2.
L1 and L0: Multiple tables can exist at each of these levels.
L2: Only one exists, tracked by a register.
Minimum Requirement: 3 pages (1 from L2, 1 from L1, 1 from L0).
Total Size: 12 kilobytes.
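A minimal sketch of the sizing above in Python, using only the figures already stated (39-bit addresses, 4 KB pages, 3 levels); the variable names are illustrative.

```python
# Page-table sizing for a 39-bit virtual address with 4 KB pages,
# as described above: 12 offset bits and 9 index bits per level.
VA_BITS = 39
OFFSET_BITS = 12                                 # 4 KB pages
LEVELS = 3
INDEX_BITS = (VA_BITS - OFFSET_BITS) // LEVELS   # 27 / 3 = 9
PAGE_SIZE = 1 << OFFSET_BITS                     # 4096 bytes

# Minimum: one table page at each level (L2, L1, L0).
min_pages = LEVELS
print(INDEX_BITS)                                # 9
print(min_pages * PAGE_SIZE)                     # 12288 bytes = 12 KB
```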
Maximum Calculation:
L2: Maximum of 1.
L1: The single L2 table has 2^9 entries, so at most 2^9 L1 pages.
L0: Each of the 2^9 L1 tables has 2^9 entries, giving at most 2^9 × 2^9 = 2^18 L0 pages.
Total Pages: Summing all levels gives 1 + 2^9 + 2^18 ≈ 2^18 pages.
Size in Memory: Total pages multiplied by 4 kilobytes ≈ 1 GB.
Space Consumption: A fully populated page table for the whole 512 GB (2^39-byte) address space therefore costs about 1 GB, which is excessive.
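A sketch of the maximum-size arithmetic, again using only the constants above; the result matches the roughly 1 GB figure quoted.

```python
# Maximum number of page-table pages when every entry at every level is used.
PAGE_SIZE = 4096
ENTRIES = 1 << 9                      # 512 entries per table page

l2_pages = 1                          # single root table
l1_pages = ENTRIES                    # one L1 table per L2 entry: 2^9
l0_pages = ENTRIES ** 2               # one L0 table per L1 entry: 2^18
total_pages = l2_pages + l1_pages + l0_pages

total_bytes = total_pages * PAGE_SIZE
print(total_pages)                    # 262657
print(total_bytes / 2**30)            # ~1.002 GB
```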
Memory Access Time: Slower due to multiple pointers being followed, akin to a linked list traversal.
Example Scenario: Array operations may lead to repeated accesses within the same page.
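A hedged illustration of the pointer chasing: each table level is modelled as a plain dictionary, which is only a stand-in for a real in-memory table page, and the 9/9/9/12 bit split follows the layout above.

```python
# Illustrative 3-level walk: each level is modelled as a dict mapping a
# 9-bit index to the next-level table (or, at L0, to a physical page number).
def translate(l2_table, vaddr):
    offset = vaddr & 0xFFF                 # low 12 bits
    vpn = vaddr >> 12
    idx0 = vpn & 0x1FF                     # 9 bits per level
    idx1 = (vpn >> 9) & 0x1FF
    idx2 = (vpn >> 18) & 0x1FF

    l1_table = l2_table[idx2]              # memory access #1
    l0_table = l1_table[idx1]              # memory access #2
    phys_page = l0_table[idx0]             # memory access #3
    return (phys_page << 12) | offset      # then the actual data access

# Example: map virtual page 0x12345 to physical page 0x42.
l2 = {0x0: {0x91: {0x145: 0x42}}}
print(hex(translate(l2, (0x12345 << 12) | 0xABC)))   # 0x42abc
```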
Purpose: A cache for frequently accessed page table entries to speed up memory access times.
Structure: Consists of two fields:
Virtual Page Number
Page Table Entry (physical page number).
Process Flow:
Check TLB first:
TLB Hit ➔ Proceed to memory fetch.
TLB Miss ➔ Check the page table next and then memory.
Access Count:
Hit: 2 accesses (1 TLB lookup + 1 memory access).
Miss: 3 accesses (1 TLB lookup + 1 page-table access + 1 memory access).
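A minimal sketch of the check-TLB-first flow, with the TLB modelled as a dictionary from virtual page number to physical page number; the names and the single-step page-table lookup are simplifications.

```python
# Tiny TLB model: a dict from virtual page number (VPN) to physical page
# number (PPN). On a hit the page-table lookup is skipped; on a miss the
# table is consulted and the result is cached for next time.
tlb = {}

def lookup(vpn, page_table):
    if vpn in tlb:                  # TLB hit: 1 TLB access + 1 memory access
        return tlb[vpn], "hit"
    ppn = page_table[vpn]           # TLB miss: extra memory access to read the table
    tlb[vpn] = ppn                  # fill the TLB for future accesses
    return ppn, "miss"

page_table = {0x10: 0x7A, 0x11: 0x7B}
print(lookup(0x10, page_table))     # 'miss' the first time
print(lookup(0x10, page_table))     # 'hit' the second time
```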
Concept: Represents the average time taken to access memory considering both hits and misses in TLB.
Calculation Example:
TLB hit = 1 clock cycle,
Page Table access = 10 clock cycles,
Memory access = 100 clock cycles,
Probability of a hit = 80%, of a miss = 20%; the effective access time weights the hit and miss costs by these probabilities (see the sketch below).
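A worked version of the example numbers above, using one common formulation in which a hit pays the TLB plus memory cost and a miss additionally pays the page-table cost; the lecture's exact formula may weight the terms differently.

```python
# Effective (average) memory access time using the figures above.
tlb_time = 1       # clock cycles
pt_time  = 10      # page-table access
mem_time = 100     # main-memory access
p_hit    = 0.80
p_miss   = 1 - p_hit

hit_cost  = tlb_time + mem_time             # 101 cycles
miss_cost = tlb_time + pt_time + mem_time   # 111 cycles
amat = p_hit * hit_cost + p_miss * miss_cost
print(amat)                                 # 103.0 cycles
```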
Page Fault: An exception that occurs when a requested page is not mapped in the physical memory, potentially residing on disk instead.
Handling: When the access fails, the OS intervenes to bring the page into physical memory (e.g. from disk) or to deal with a permission violation.
Costly in Time: High latency (e.g., fetching data from a disk, approx. 5 ms).
Performance Concerns: Frequent page faults can severely degrade system performance.
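A sketch of how quickly faults dominate, combining the ~5 ms disk figure above with an assumed 100 ns memory access time (the 100 ns and the fault rates are illustrative, not from the notes).

```python
# Effective access time when a fraction of accesses page-fault to disk.
mem_ns   = 100                 # assumed main-memory access time (illustrative)
fault_ns = 5_000_000           # ~5 ms disk service time, as quoted above

for fault_rate in (0.0, 0.0001, 0.001):
    eat = (1 - fault_rate) * mem_ns + fault_rate * fault_ns
    print(fault_rate, eat)     # even 1 fault in 10,000 accesses -> ~600 ns (6x slower)
```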
Effect on TLB: Switching the active process changes the address space, so existing TLB entries may no longer be valid.
Handling Strategies:
Flush the TLB ➔ Clears all cached translations on every context switch.
Use Process IDs ➔ Tags each TLB entry with its process (address-space) ID so entries from different processes can coexist, avoiding a full flush.
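A hedged sketch contrasting the two strategies: an untagged TLB must be flushed on a context switch, while tagging entries with a process/address-space ID (ASID) lets translations from different processes coexist. The dictionary model and ASID numbers are illustrative.

```python
# Strategy 1: untagged TLB -> must be flushed on a context switch.
tlb = {0x10: 0x7A}                 # vpn -> ppn for the old process
def context_switch_flush():
    tlb.clear()                    # all cached translations are discarded

# Strategy 2: tag each entry with a process/address-space ID (ASID).
tagged_tlb = {(1, 0x10): 0x7A}     # (asid, vpn) -> ppn
def lookup(asid, vpn):
    return tagged_tlb.get((asid, vpn))   # other processes' entries simply miss

print(lookup(1, 0x10))   # 122 (0x7A): still valid after switching processes
print(lookup(2, 0x10))   # None: process 2 does not hit process 1's entry
```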
Kernel's Role: Sets up the initial memory layout for a process, including code, data, heap, and stack regions.
Error Handling for Null Pointers: The page at address zero is marked invalid by the kernel, so dereferencing a null pointer faults with an error.
Guard Pages: Invalid pages placed beside each stack to catch stack overflows before they corrupt neighbouring regions.
Purpose: Translates virtual memory addresses to physical addresses, assisting in efficient memory management.
Implementations: Can be single-level or multi-level page-table structures.
Memory Hierarchy: Balances speed and capacity across different memory levels (cache, RAM, disk).
Demand Paging: Loads pages into RAM only when they are accessed, using swap space on disk for pages that do not fit in memory.
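A minimal sketch of demand paging under these assumptions: pages start out in swap, are loaded into a small fixed number of RAM frames on first access, and an arbitrary resident page is evicted when the frames are full (the frame count and eviction choice are illustrative; replacement policies are covered below).

```python
from collections import OrderedDict

FRAMES = 2                         # physical frames available (illustrative)
resident = OrderedDict()           # page -> contents currently in RAM

def access(page, swap):
    if page in resident:
        return resident[page]      # already in RAM: no fault
    # Page fault: evict a resident page if RAM is full, then load from swap.
    if len(resident) >= FRAMES:
        evicted, data = resident.popitem(last=False)   # arbitrary victim (oldest)
        swap[evicted] = data                           # write back to swap
    resident[page] = swap[page]                        # "slow" disk read
    return resident[page]

swap = {0: "page0", 1: "page1", 2: "page2"}
for p in (0, 1, 2, 0):
    access(p, swap)
print(list(resident))              # [2, 0]: pages still resident at the end
```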
Definition (Working Set): The set of pages a process requires over a bounded window of time; if it exceeds physical memory, performance degrades into thrashing.
Optimal: Replaces the page that will not be used for the longest time in the future (theoretical only, since it requires knowledge of future accesses).
FIFO (First In, First Out): Replaces the oldest page regardless of future use.
LRU (Least Recently Used): Replaces the least recently accessed page.
Demonstrates the inefficiency of evicting a page just before its next use, which increases page faults.
Hardware Complexity: Maintaining a usage counter or timestamp for every page in hardware is difficult to do efficiently.
Software Implementation: Uses data structures such as counters or linked lists to track usage (see the sketch below).
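A short sketch comparing FIFO and LRU fault counts on an arbitrary reference string (the string and frame count are illustrative). LRU is implemented here with an OrderedDict, i.e. the linked-list-style software approach mentioned above.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    q, resident, faults = deque(), set(), 0
    for p in refs:
        if p not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(q.popleft())   # evict the oldest-loaded page
            resident.add(p)
            q.append(p)
    return faults

def lru_faults(refs, frames):
    cache, faults = OrderedDict(), 0            # insertion order == recency order
    for p in refs:
        if p in cache:
            cache.move_to_end(p)                # mark as most recently used
        else:
            faults += 1
            if len(cache) == frames:
                cache.popitem(last=False)       # evict the least recently used
            cache[p] = True
    return faults

refs = [1, 2, 3, 1, 4, 1, 2, 5, 1, 2]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 7 6 for this string
```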