Memory
Memory is a large array of words or bytes, each with its own address, serving as a repository of quickly accessible data shared by the CPU and I/O devices.
Volatility of Main Memory
Main memory is volatile, meaning its contents are lost during system failure or power loss.
Key Activities of the Operating System in Memory Management
The OS keeps track of used memory parts, decides which processes to load next when space is available, and handles memory allocation and deallocation.
Bringing a Program into Memory
A program initially resides on disk as a binary executable file and must be loaded into memory by the operating system to be executed.
Multiple Processes in Memory
To improve CPU utilization and speed, operating systems often keep multiple processes in memory at the same time, requiring effective memory management schemes.
User Process Protection in Physical Memory
Most systems allow user processes to reside in any part of physical memory and provide separate memory areas to prevent one process from interfering with another.
Address Binding
Address binding is the process of mapping symbolic addresses in source programs to actual memory addresses, which can be relocatable or absolute.
Compile-time Binding
Compile-time binding occurs when the process's memory location is known at compile time, allowing absolute code to be generated.
Load-time Binding
Load-time binding occurs when final addresses are assigned as the program is loaded, requiring the compiler to generate relocatable code.
Execution-time Binding
Execution-time binding happens during program execution, allowing movement between memory segments.
Role of the Compiler in Address Binding
The compiler binds symbolic addresses to relocatable addresses during program compilation.
Loader/Linkage Editor in Address Binding
The loader or linkage editor binds relocatable addresses to absolute addresses when the program is loaded into memory.
Logical Address Space
The logical (virtual) address space is the set of addresses generated by the CPU; the physical address space is the set of addresses seen by the memory hardware.
Difference Between Logical and Physical Addresses
In compile-time and load-time binding, logical and physical addresses are the same; in execution-time binding, they differ, with the MMU performing run-time address translation.
Memory Management Unit (MMU)
The MMU performs run-time mapping from virtual (logical) addresses to physical addresses, using a relocation register to add the base address to each virtual address.
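The relocation-register mapping can be sketched as follows; the register values (base 14000, limit 3000) and the `translate` helper are illustrative, not a real MMU interface:

```python
# Sketch of MMU-style dynamic relocation (illustrative values).
RELOCATION_REGISTER = 14000   # base physical address of the process
LIMIT_REGISTER = 3000         # size of the logical address space

def translate(logical_addr):
    """Map a CPU-generated logical address to a physical address."""
    if logical_addr >= LIMIT_REGISTER:
        raise MemoryError("trap: logical address out of range")
    return RELOCATION_REGISTER + logical_addr

print(translate(346))   # 14346
```

Every address the process generates is checked against the limit and then offset by the base, so the process never sees a physical address directly.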
Dynamic Loading
Dynamic loading is a technique where a routine is loaded into memory only when it is called, optimizing memory usage by loading necessary code on demand.
Dynamic Linking
Dynamic linking delays the linking process until execution time, often used for system libraries, allowing shared use and efficient memory utilization.
Purpose of a Stub in System Libraries
A stub is a small piece of code used to locate and execute the appropriate memory-resident library routine, avoiding each user program including a copy of system libraries.
Swapping in Operating Systems
Swapping involves temporarily moving a process from memory to a backing store (a large disk) and later bringing it back into memory for continued execution, helping manage memory resources.
Backing Store in Swapping
The backing store is a fast, large disk that stores copies of all memory images for all users and provides direct access for swapping processes in and out of memory.
CPU scheduler
Selects a process for execution.
Dispatcher
Checks if the process is in memory; if not, it may swap out other processes to make space.
Swap time
The major part of swap time is transfer time, which is proportional to the amount of memory being swapped.
Compile-time or load-time binding
With compile-time or load-time binding, a swapped-out process must be swapped back into the same memory space it occupied before.
Problem with swapped-out processes
A problem is if a swapped-out process is performing I/O, which can be mitigated by latching the job in memory during I/O or performing I/O only into OS buffers.
Contiguous memory allocation
Involves dividing main memory into partitions where each process resides in a single, continuous memory location, typically used in older systems.
Mapping and protection in contiguous allocation
Achieved using relocation and limit registers; the relocation register holds the smallest physical address, and the limit register defines the range of legal logical addresses.
Fixed partitions
Have a set size.
Variable-sized partitions
Tracked by the OS, which manages allocated and free holes, allowing more flexible memory allocation.
Dynamic storage-allocation problem
Involves how to satisfy a memory request of size 'n' from a list of free holes, using strategies like first-fit, best-fit, or worst-fit.
First-fit strategy
Allocates the first sufficiently large hole.
Best-fit strategy
Allocates the smallest suitable hole.
Worst-fit strategy
Allocates the largest hole, aiming to leave the largest leftover space.
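The three strategies can be sketched as follows; the hole list and the `(start, size)` tuple representation are assumptions for illustration:

```python
# Sketch of hole-selection strategies for a request of size n.
# holes: list of (start, size) tuples describing free memory.

def first_fit(holes, n):
    return next((h for h in holes if h[1] >= n), None)

def best_fit(holes, n):
    fits = [h for h in holes if h[1] >= n]
    return min(fits, key=lambda h: h[1]) if fits else None

def worst_fit(holes, n):
    fits = [h for h in holes if h[1] >= n]
    return max(fits, key=lambda h: h[1]) if fits else None

holes = [(0, 100), (300, 500), (1000, 200)]
print(first_fit(holes, 150))  # (300, 500): first hole large enough
print(best_fit(holes, 150))   # (1000, 200): smallest adequate hole
print(worst_fit(holes, 150))  # (300, 500): largest hole
```

Simulations suggest first-fit and best-fit usually outperform worst-fit in both speed and storage utilization.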
External fragmentation
Occurs when enough total memory exists for a request but is not contiguous, making it impossible to allocate memory despite sufficient total free space.
Internal fragmentation
Happens when allocated memory exceeds the requested size, leaving unused space within a partition.
Compaction
Shuffles memory contents to consolidate free memory into one large block, but it requires dynamic relocation and incurs a cost proportional to the amount of memory moved.
Paging
Eliminates external fragmentation by dividing memory into fixed-size pages, allowing non-contiguous allocation of physical memory.
Physical memory in paging system
Divided into fixed-sized blocks called frames.
Logical memory in paging system
Divided into fixed-sized blocks called pages, with page size equal to frame size.
Page table
Used to translate logical addresses to physical addresses by storing the base address of each page in physical memory.
Address translation in paging system
The CPU-generated address is divided into a page number and a page offset; the page number is used as an index into the page table to find the base address, which is then combined with the offset to form the physical address.
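The translation can be sketched as follows, assuming a 4 KiB page size and a small illustrative page table:

```python
# Sketch of paging address translation (4 KiB pages assumed).
PAGE_SIZE = 4096                           # must be a power of 2
OFFSET_BITS = PAGE_SIZE.bit_length() - 1   # 12

page_table = {0: 5, 1: 9, 2: 2}            # page -> frame (illustrative)

def translate(logical_addr):
    page = logical_addr >> OFFSET_BITS     # high bits: page number
    offset = logical_addr & (PAGE_SIZE - 1)  # low bits: page offset
    frame = page_table[page]               # missing entry => page fault in a real system
    return frame * PAGE_SIZE + offset

print(translate(4100))   # page 1, offset 4 -> frame 9 -> 9*4096 + 4 = 36868
```

Because the page size is a power of 2, the split into page number and offset is a shift and a mask rather than a division.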
Page table base register (PTBR)
Points to the location of the page table in main memory.
Page table length register (PTLR)
Indicates the size of the page table.
Two-memory-access problem
Accessing data requires two memory accesses: one for the page table and one for the data.
Translation Look-Aside Buffer (TLB)
A TLB is a special hardware cache that performs parallel searches for page numbers, providing quick access to frame numbers and reducing address translation time.
TLB hits and misses
On a TLB hit, the frame number is retrieved quickly. On a miss, the frame number is obtained from the page table in memory, and the TLB may update its entries using algorithms like LRU.
Address Space Identifiers (ASIDs)
ASIDs identify different processes within TLB entries, allowing the TLB to hold entries for multiple processes simultaneously and avoiding the need to flush the TLB during context switches.
Effective Access Time (EAT)
EAT is the average time to access memory, weighting the TLB hit and miss cases by the hit ratio: EAT = α(t + m) + (1 − α)(t + 2m), where α is the TLB hit ratio, t the TLB lookup time, and m the memory access time.
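A worked example of the formula, using assumed timings of a 20 ns TLB lookup and 100 ns memory access:

```python
# Worked EAT calculation (20 ns TLB lookup and 100 ns memory access assumed).
def eat(hit_ratio, tlb_ns, mem_ns):
    hit = tlb_ns + mem_ns          # one memory access on a TLB hit
    miss = tlb_ns + 2 * mem_ns     # page-table access plus data access on a miss
    return hit_ratio * hit + (1 - hit_ratio) * miss

print(eat(0.80, 20, 100))  # 140.0 ns
print(eat(0.98, 20, 100))  # 122.0 ns
```

Raising the hit ratio from 80% to 98% cuts the average access time from 140 ns to 122 ns, which is why a larger TLB improves performance.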
TLB size and system performance
A larger TLB generally increases the hit ratio, leading to a lower EAT and improved system performance.
Memory protection in paging
Memory protection is implemented by associating a protection bit with each frame and using valid-invalid bits in the page table to indicate whether a page is in the process's logical address space and in memory.
Valid and invalid bits in the page table
A valid bit indicates the page is in the process's logical address space and in memory; an invalid bit indicates the page is either outside the process's logical address space or currently on disk.
Frame size as a power of 2
Having frame size as a power of 2 simplifies address translation by allowing easier calculation and division of addresses.
Internal fragmentation in paging
Internal fragmentation occurs because fixed-sized frames may not be fully utilized by processes, leaving unused space within frames.
Operating systems using paging
Unix, Linux, and Windows.
Additional bits in access permissions
Additional bits can be used for access permissions like read-write or read-only. The Memory Management Unit (MMU) checks accesses against these bits and traps to the OS on illegal attempts.
Challenges of large logical address spaces
Large logical address spaces (e.g., 2^32 to 2^64) can cause page tables to become very large, making management and access more complex.
Hierarchical paging
Hierarchical paging divides the page table into smaller pieces, typically in a two-level scheme where the page table is paged. Address translation involves indexing into an outer page table and then into an inner page table, with caching helping maintain performance.
Multi-level paging
Multi-level paging involves multiple levels of page tables, requiring multiple memory accesses for address translation. Caching these tables helps maintain reasonable performance despite the multiple accesses.
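The two-level address split can be sketched as follows, assuming a common 32-bit layout: a 10-bit outer index, a 10-bit inner index, and a 12-bit offset:

```python
# Sketch of a two-level address split for a 32-bit address with 4 KiB pages:
# 10 bits outer index | 10 bits inner index | 12 bits offset.
def split_two_level(addr):
    offset = addr & 0xFFF          # low 12 bits
    inner = (addr >> 12) & 0x3FF   # next 10 bits: index into inner page table
    outer = addr >> 22             # top 10 bits: index into outer page table
    return outer, inner, offset

print(split_two_level(0x00403004))  # (1, 3, 4)
```

The outer index selects an inner page table, the inner index selects a frame, and the offset locates the byte within that frame.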
Hashed page tables
Hashed page tables are used for address spaces larger than 32 bits. Virtual page numbers are hashed into a table containing chains of elements, each with the virtual page number, mapped frame, and a pointer to the next element.
Clustered page tables
Clustered page tables are similar to hashed page tables but organize entries in clusters for improved access.
Inverted page tables
Use a single table for all processes, with one entry per real page of memory, containing the virtual address and process ID.
Advantages of inverted page tables
Reduce memory needed but increase search time, which can be mitigated with hashing.
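A minimal sketch of an inverted page table with a hash index; the process IDs, frame contents, and `lookup` helper are illustrative:

```python
# Sketch of an inverted page table: one entry per physical frame,
# holding (process_id, virtual_page); the data here is illustrative.
frames = [("P1", 0), ("P2", 7), ("P1", 3)]   # list index = frame number

# Hash index mapping (pid, virtual page) -> frame, avoiding a linear search.
index = {entry: frame for frame, entry in enumerate(frames)}

def lookup(pid, vpage):
    frame = index.get((pid, vpage))
    if frame is None:
        raise MemoryError("page fault")
    return frame

print(lookup("P1", 3))   # 2
```

The table's size tracks physical memory rather than the sum of all virtual address spaces, which is where the memory saving comes from.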
Reentrant code sharing in paging
One copy of read-only code can be shared among processes, while each process maintains separate copies of private code and data.
Segmentation in memory management
A memory management scheme that presents memory as a collection of logical segments (e.g., main program, procedures, variables, stack).
Structure of a logical address in segmentation
A two-tuple ⟨segment-number, offset⟩, where the segment number selects an entry in the segment table and the offset is the location within that segment.
Purpose of the segment table in segmentation
Maps two-dimensional logical addresses to one-dimensional physical addresses, with each entry containing a base and a limit.
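Segment-table translation can be sketched as follows; the table contents are illustrative:

```python
# Sketch of segmentation address translation.
# Each segment-table entry is (base, limit); values are illustrative.
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:
        raise MemoryError("trap: offset beyond segment limit")
    return base + offset

print(translate(2, 53))    # 4353
```

The limit check is what turns the segment table into a protection mechanism as well as a mapping.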
Segment-table base register (STBR)
Points to the segment table's location.
Segment-table length register (STLR)
Indicates the number of segments used by a program, aiding in checking the legality of segment numbers.
Memory allocation strategies in segmentation
Can use first-fit or best-fit strategies, which may lead to external fragmentation.
Drawbacks of memory allocation in segmentation
External fragmentation makes it difficult to find contiguous free space for segments.
Memory protection in segmentation
Provided by associating a validation bit and read/write/execute privileges with each segment table entry.
Virtual memory
Separates user logical memory from physical memory, allowing programs to execute even if only partially in memory.
Primary purpose of virtual memory
Overcomes physical memory limitations.
Advantages of virtual memory
Programs are not constrained by physical memory size, enabling more concurrent programs and reducing I/O for loading/swapping.
Demand paging
A memory management scheme that brings a page into memory only when it is needed.
Benefits of demand paging
Reduces I/O and memory requirements, providing faster response and allowing memory sharing among more users.
Hardware support for demand paging
Requires a valid-invalid bit in the page table to indicate if a page is in memory.
Page request when page is in memory
The Memory Management Unit (MMU) proceeds as usual without interruption.
Page request when page is not in memory
The MMU generates a page-fault trap to the operating system to handle the situation.
Steps involved in handling a page fault
The OS checks an internal table to determine whether the reference was valid; if so, it finds a free frame, schedules a disk read to bring the desired page into that frame, updates the page table and valid bit once the read completes, and restarts the instruction that was interrupted by the trap.
Pure demand paging
Pure demand paging is a scheme where a page is only brought into memory when it is required, with a process starting with no pages in memory and page faults occurring until all needed pages are loaded.
Page fault handling
The CPU needs to restart instructions after a page fault regardless of whether the fault occurred during instruction fetch or operand fetch to ensure correct program execution.
Page replacement
Page replacement occurs when a process needs a page but there are no free frames; the OS selects an unused page in memory to swap out, enabling large virtual memory on smaller physical memory.
Frame-allocation algorithm
A frame-allocation algorithm determines how many frames to allocate to each process, preventing over-allocation of memory during demand paging.
Page replacement algorithms
Page replacement algorithms aim to minimize the page-fault rate by selecting which pages to replace based on specific strategies, evaluated using reference strings.
FIFO page replacement algorithm
FIFO replaces the oldest page in memory. It can suffer from Belady's anomaly, where increasing the number of frames results in more page faults.
Optimal (OPT) page replacement algorithm
OPT replaces the page that will not be used for the longest time in the future. It has the lowest page fault rate but is difficult to implement because it requires future knowledge of reference strings.
Least Recently Used (LRU) page replacement algorithm
LRU replaces the page that has not been used for the longest time. It generally performs well and does not suffer from Belady's anomaly, but requires hardware assistance to track recency.
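FIFO and LRU can be compared by counting faults on a reference string; this is a minimal sketch with assumed helper names, using the 20-reference string common in textbook examples:

```python
from collections import OrderedDict

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement."""
    frames, queue, faults = set(), [], 0
    for p in refs:
        if p not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.pop(0))  # evict the oldest page
            frames.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, nframes):
    """Count page faults under LRU replacement (OrderedDict tracks recency)."""
    frames, faults = OrderedDict(), 0
    for p in refs:
        if p in frames:
            frames.move_to_end(p)             # mark as most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)    # evict least recently used
            frames[p] = True
    return faults

refs = [7,0,1,2,0,3,0,4,2,3,0,3,2,1,2,0,1,7,0,1]
print(fifo_faults(refs, 3))  # 15
print(lru_faults(refs, 3))   # 12
```

With 3 frames, FIFO incurs 15 faults and LRU 12 on this string, illustrating why LRU is generally the better approximation of optimal behavior.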
LRU approximation using hardware
LRU can be approximated with a reference bit set when a page is referenced. The algorithm replaces pages with a reference bit of 0, although it doesn't know the exact order of non-referenced pages.
Additional-reference-bits algorithm
It keeps a history of reference bits over time, often in an 8-bit shift register, and replaces the page with the smallest value.
Second chance algorithm
A FIFO-based algorithm that uses the reference bit; if the page to be replaced has its reference bit set, the bit is cleared, and the page is given a second chance.
Enhanced second-chance algorithm
It incorporates the modify bit to prioritize replacement of pages that haven't been used or modified.
Page Buffering Algorithm
This algorithm maintains a pool of free frames. On a page fault, a victim frame is selected, but the desired page is read into a frame from the free pool before the victim is written out, letting the process restart sooner; the victim frame is later written to disk and added back to the free pool.
Frame Allocation among processes
Frames must be allocated among processes based on total available frames and minimum requirements; too few frames increase page faults. Allocation schemes include fixed (equal or proportional) and priority-based, where higher priority processes get more frames.
Global replacement
Global replacement allows processes to select replacement frames from all system frames, potentially increasing throughput but reducing process control over page faults.
Local replacement
Local replacement restricts processes to their own allocated frames, keeping the number constant.
Thrashing in memory management
Thrashing occurs when a process spends more time paging than executing, leading to a significant decrease in performance.
Thrashing
Occurs when a process has insufficient frames, leading to a high page-fault rate, low CPU utilization, and excessive paging, which hampers overall system productivity.
Scenario leading to thrashing
When the OS increases multiprogramming due to low CPU utilization, processes start page faulting and stealing frames from others, causing more page faults and a cycle of repeated thrashing.
Reducing thrashing
Decreasing the degree of multiprogramming or using local (or priority) replacement algorithms, which slow down thrashing and reduce its impact.