LRU (Least Recently Used) is a page replacement algorithm.
Idea: Track page references so that the page unused for the longest time is evicted first.
Data Structure: Doubly linked list.
On each reference, the page is moved to the front of the list.
During replacement, the victim is taken from the back (the least recently used end).
Page Replacement Complexity: O(1) for removing from the back of the list.
However, every page reference requires moving a node to the front, which takes around 6 pointer updates, making it expensive in practice.
In practice, a direct software implementation of LRU is infeasible: the list would have to be updated on every single memory reference.
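As a rough illustration, here is a minimal C sketch of the doubly-linked-list bookkeeping; the struct and function names are made up for illustration, and real kernels do not implement LRU this way.

```c
#include <stddef.h>

struct frame {
    int page;                   /* page number held by this frame */
    struct frame *prev, *next;
};

struct lru {
    struct frame *head;         /* most recently used end */
    struct frame *tail;         /* least recently used end: eviction victim */
};

/* Detach a frame from the list, fixing its neighbours (or the list ends). */
static void lru_unlink(struct lru *l, struct frame *f) {
    if (f->prev) f->prev->next = f->next; else l->head = f->next;
    if (f->next) f->next->prev = f->prev; else l->tail = f->prev;
}

/* On every reference: move the frame to the front. Roughly 6 pointer
 * updates in total, which is why doing this per memory access is costly. */
static void lru_touch(struct lru *l, struct frame *f) {
    if (l->head == f) return;            /* already most recently used */
    lru_unlink(l, f);
    f->prev = NULL;
    f->next = l->head;
    if (l->head) l->head->prev = f;
    l->head = f;
    if (!l->tail) l->tail = f;
}

/* On replacement: evicting the least recently used frame is O(1). */
static struct frame *lru_evict(struct lru *l) {
    struct frame *victim = l->tail;
    if (victim) lru_unlink(l, victim);
    return victim;
}
```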
This motivates approximations of LRU, such as the Second Chance Algorithm and the Clock Algorithm.
Least Frequently Used (LFU): Another approach using two queues for page management.
Adaptive Replacement Cache (ARC): Adapts between recency-based and frequency-based replacement for memory management.
Clock Algorithm structure: A circular arrangement of elements (frames), each representing a page.
Each element contains:
Page number
Reference bit indicating recent usage.
In addition, a single clock hand points to the next element to be examined.
On a page fault: Check the reference bit at the element under the clock hand.
If 0 (not recently referenced): Replace that page with the new one and advance the hand.
If 1 (recently referenced): Set the reference bit to 0, advance the hand, and check again; the page gets a second chance.
If the requested page is already in memory (a hit), simply set its reference bit to 1 without advancing the hand.
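A compact C sketch of these rules, assuming a small fixed number of frames; the array layout and names such as clock_reference are illustrative, not a prescribed implementation.

```c
#define NFRAMES 4

static int pages[NFRAMES];      /* page number in each frame, -1 = empty */
static int refbit[NFRAMES];     /* reference bit per frame */
static int hand = 0;            /* the clock hand: next frame to examine */

/* Mark all frames empty before the first request. */
void clock_init(void) {
    for (int i = 0; i < NFRAMES; i++) { pages[i] = -1; refbit[i] = 0; }
}

/* Handle one page reference; returns 1 on a page fault, 0 on a hit. */
int clock_reference(int page) {
    /* Hit: the page is resident, so set its reference bit; the hand stays put. */
    for (int i = 0; i < NFRAMES; i++) {
        if (pages[i] == page) {
            refbit[i] = 1;
            return 0;
        }
    }
    /* Miss: advance the hand, clearing reference bits that are 1 (the
     * "second chance"), until a frame with reference bit 0 is found. */
    while (refbit[hand] == 1) {
        refbit[hand] = 0;
        hand = (hand + 1) % NFRAMES;
    }
    pages[hand] = page;          /* replace the victim with the new page */
    refbit[hand] = 1;            /* common convention: loading counts as a reference */
    hand = (hand + 1) % NFRAMES;
    return 1;
}
```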
Example: Memory capacity of 4 frames with a continuous sequence of page requests.
Starting point: All frames are empty, so the first references cause page faults.
Replacement: A new page replaces the frame the hand points to, provided that frame's reference bit is 0.
The initial references fill the empty frames.
If an accessed page (e.g., Page 1) is already in memory with its reference bit set to 1, nothing changes.
When a new page (e.g., Page 5) must be brought in, the hand scans the frames: reference bits of 1 are reset to 0 as the hand advances, until a frame with reference bit 0 is found and replaced; a trace of this behaviour is sketched below.
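Continuing the sketch above, a hypothetical driver with 4 frames; the request sequence is assumed for illustration, since the actual sequence is not given here.

```c
#include <stdio.h>

/* Prototypes from the clock sketch above. */
void clock_init(void);
int clock_reference(int page);

int main(void) {
    int refs[] = {1, 2, 3, 4, 1, 5, 2};     /* assumed request sequence */
    int n = (int)(sizeof refs / sizeof refs[0]);
    clock_init();
    for (int i = 0; i < n; i++)
        printf("ref %d -> %s\n", refs[i],
               clock_reference(refs[i]) ? "page fault" : "hit");
    return 0;
}
```

With this assumed sequence, the first four references fault while memory fills, page 1 then hits, and page 5 forces a full sweep that clears every reference bit before a victim is found.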
Swapping can slow processes down; larger page sizes (e.g., 2 MB or 1 GB huge pages) reduce paging overhead but increase internal fragmentation.
Static Allocation: Global variables are allocated at program load, remain for the lifetime of the process, and never need explicit deallocation.
Dynamic Allocation: Required when memory needs vary at run time; it is served from the stack or the heap.
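A minimal C example contrasting the three kinds of allocation; the array sizes and names are illustrative.

```c
#include <stdio.h>
#include <stdlib.h>

int table[1024];                      /* static allocation: laid out at program load,
                                         lives for the whole process */

void work(size_t n) {
    int scratch[64];                  /* stack (automatic) allocation: freed on return */
    int *buf = malloc(n * sizeof *buf);   /* heap allocation: size known only at run time */
    if (buf == NULL) return;
    scratch[0] = table[0];
    buf[0] = scratch[0];
    printf("heap block of %zu bytes at %p\n", n * sizeof *buf, (void *)buf);
    free(buf);                        /* heap memory must be released explicitly */
}

int main(void) {
    work(100);                        /* the size argument varies at run time */
    return 0;
}
```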
Dynamic allocation may lead to fragmentation.
Internal Fragmentation: Space wasted inside an allocated block because the block is larger than the request (e.g., a 100-byte request placed in a 128-byte block wastes 28 bytes).
External Fragmentation: Free memory scattered in small gaps between allocations; the total free space may be sufficient, yet no single gap is large enough for a request.
Key difficulty: Allocations have different lifetimes and sizes, and it is exactly these differences that cause fragmentation.
Best Fit: Allocates the smallest free block large enough for the request; tends to leave many tiny, unusable leftovers (external fragmentation).
Worst Fit: Allocates from the largest free block so that the leftover stays usable; in practice it still wastes space.
First Fit: Allocates the first free block that can satisfy the request; the search stops at the first match. A small comparison of these strategies is sketched below.
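A toy C sketch comparing first fit and best fit over a hypothetical list of free block sizes; the sizes and the request are made up for illustration, and a real allocator would also track block addresses.

```c
#include <stdio.h>

#define NBLOCKS 5

/* Return the index of the chosen free block, or -1 if none fits. */
int first_fit(const int free_size[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (free_size[i] >= request)
            return i;                    /* stop at the first block that fits */
    return -1;
}

int best_fit(const int free_size[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)          /* scan the whole free list */
        if (free_size[i] >= request &&
            (best == -1 || free_size[i] < free_size[best]))
            best = i;                    /* remember the tightest fit so far */
    return best;
}

int main(void) {
    int free_size[NBLOCKS] = {100, 500, 200, 300, 600};  /* assumed free blocks */
    int request = 212;
    printf("first fit: block %d\n", first_fit(free_size, NBLOCKS, request));
    printf("best  fit: block %d\n", best_fit(free_size, NBLOCKS, request));
    return 0;
}
```

For the 212-byte request, first fit takes the 500-byte block it meets first, while best fit scans everything and picks the 300-byte block, leaving a smaller leftover.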
Power-of-2 (buddy) allocation: Requests are rounded up to power-of-2 block sizes to limit external fragmentation.
Management is simplified by structured splitting: a large block is split into halves until the requested size is reached, and freed halves can be merged back.
Internal fragmentation still arises as requests are rounded up to the nearest power of 2.
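A small C sketch of the rounding rule and the internal fragmentation it causes; the request sizes and the next_pow2 helper are illustrative.

```c
#include <stdio.h>

/* Round a request up to the next power of 2, as a buddy-style allocator would. */
static size_t next_pow2(size_t n) {
    size_t p = 1;
    while (p < n)
        p <<= 1;                 /* double until the block is large enough */
    return p;
}

int main(void) {
    size_t requests[] = {100, 128, 3000, 70000};   /* assumed request sizes in bytes */
    for (size_t i = 0; i < sizeof requests / sizeof requests[0]; i++) {
        size_t block = next_pow2(requests[i]);
        printf("request %6zu -> block %6zu, internal fragmentation %5zu bytes\n",
               requests[i], block, block - requests[i]);
    }
    return 0;
}
```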