Demand Paging
A form of paging where pages are kept on disk and brought into memory only when needed. Unless it is pure ________, the pages with the initial code and data are loaded into memory at start. Any time a page that is not in memory is accessed, a page fault occurs.
Swap Space
Portion of the disk reserved for page data modified by a program, such as the heap/stack pages
Temporal Locality
If a process accesses an item in memory, it will tend to reference the same item again soon.
Spatial Locality
If a process accesses an item in memory, it will tend to reference an adjacent item soon.
Effective Access Time (with Demand Paging)
EAT = (1 − p) × memory access time + p × page fault time
p = page fault rate, where 0 means no page faults and 1 means every access faults
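The formula above can be sketched numerically. The 200 ns memory access and 8 ms fault-service timings below are illustrative assumptions, not values from these notes:

```python
def eat_ns(p, mem_ns=200, fault_ns=8_000_000):
    """Effective access time: (1 - p) * memory access + p * fault service.
    p is the page-fault rate (0 = no faults, 1 = every access faults)."""
    return (1 - p) * mem_ns + p * fault_ns

print(eat_ns(0.0))     # plain memory access time when nothing faults
print(eat_ns(0.001))   # even 1 fault per 1000 accesses dominates the average
```

Even a 0.1% fault rate pushes the average access from 200 ns to roughly 8.2 µs, which is why keeping the fault rate low matters so much.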
FIFO (Page Replacement)
Evicting the oldest page
Simple to implement, but doesn’t consider how frequently a page has been accessed, and can lead to increased disk access
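A minimal simulation of FIFO replacement (a sketch, not from the notes). The reference string used in the demo is the classic one that also exhibits Belady's anomaly:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for a reference string under FIFO replacement."""
    in_memory, order, faults = set(), deque(), 0
    for page in refs:
        if page in in_memory:
            continue                            # hit: FIFO order unchanged
        faults += 1
        if len(in_memory) == nframes:
            in_memory.discard(order.popleft())  # evict the oldest page
        in_memory.add(page)
        order.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10: more frames, MORE faults
```

Note that going from 3 to 4 frames increases the fault count from 9 to 10 for this string, which is exactly Belady's anomaly.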
OPT (MIN, Belady’s algorithm)
A theoretical algorithm that evicts the page that will not be used for the longest time in the future. It gives the lowest possible fault rate, but cannot be implemented because it requires knowledge of future references.
LRU (Least Recently Used)
Evicts the page that has gone unused the longest. It is a good approximation of Belady's algorithm, but performs poorly when locality is low (which is rare in most programs).
Can be implemented with a time stamp per page or with a linked list of pages, which is expensive in time and space respectively.

Belady’s Anomaly
When adding another memory frame leads to more page faults. It occurs with FIFO; stack algorithms such as LRU and OPT do not suffer from it.
Additional Reference Bits Algorithm
An algorithm used by Linux to approximate LRU cheaply. Each page has an 8-bit value; on reference, the high bit is set to 1, and the value is shifted right at a regular interval. During a page fault, the page with the lowest value is evicted.
Second-Chance Algorithm
An algorithm for approximating LRU. Each frame is kept in a circular list and has a reference bit. On reference, the bit is set to 1. On a page fault, the OS checks the reference bit of the frame under the pointer: if it is 0, that page is replaced; if it is 1, the bit is cleared to 0 and the pointer advances to the next frame.
The worst case, although rare, occurs when all pages in the list have a bit of 1, forcing a full pass around the list.
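The clock behavior above can be sketched as follows (an illustration, not from the notes); the pointer sweeps the circular list, clearing reference bits until it finds a 0:

```python
def clock_faults(refs, nframes):
    """Second-chance (clock) replacement: count page faults."""
    frames = [None] * nframes   # page held by each frame
    refbit = [0] * nframes      # reference bit per frame
    hand, faults = 0, 0
    for page in refs:
        if page in frames:
            refbit[frames.index(page)] = 1      # referenced again
            continue
        faults += 1
        # advance the hand, clearing reference bits, until a 0 is found
        while frames[hand] is not None and refbit[hand] == 1:
            refbit[hand] = 0
            hand = (hand + 1) % nframes
        frames[hand], refbit[hand] = page, 1    # replace the victim
        hand = (hand + 1) % nframes
    return faults

print(clock_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))
```

The loop always terminates: if every bit is 1, the first full sweep clears them all, so the second visit to the starting frame finds a 0.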
Enhanced Second-Chance Algorithm
An approximation of LRU that uses a reference bit and a modify bit, with a circular list of the frames. The modify bit being 1 indicates that the page was modified. For (r,m):
(1,1) becomes (0,1)
(1,0) becomes (0,0)
(0,1) locks the page in memory until the completion of the I/O to write out the page, and becomes (0,0) once finished
(0,0) is replaced
Counting-Based Algorithms
LRU approximation algorithms that keep a counter of the number of references made to each page. With Least Frequently Used, the page with the smallest count is replaced, which may unfairly target a page that was just brought in. With Most Frequently Used, the page with the highest count is replaced, on the reasoning that a newly loaded page is about to be used.
Fixed Allocation
The initial allocation of pages/frames when programs start.
Equal Allocation - each process gets an equal share of the free frames, which fails for processes of different sizes (internal fragmentation)
Proportional/Variable Allocation - frames are allocated in proportion to process size: allocation_i = (size_i / sum of all process sizes) × total number of frames
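The proportional formula can be checked with the common textbook example of two processes of sizes 10 and 127 sharing 62 frames (the numbers are illustrative):

```python
def proportional_allocation(sizes, total_frames):
    """Allocate frames to each process in proportion to its size:
    a_i = (s_i / sum(sizes)) * total_frames, rounded down."""
    total = sum(sizes)
    return [s * total_frames // total for s in sizes]

# Two processes of sizes 10 and 127 share 62 frames.
print(proportional_allocation([10, 127], 62))   # [4, 57]
```

The small process gets 10/137 of the 62 frames (4) and the large one 127/137 of them (57), instead of an equal 31 each.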
Global Allocation
For page replacement: a process selects a victim frame from the entire set of frames, so one process can take a frame from another. A single replacement algorithm (e.g., LRU) runs over the entire set of pages.
It is flexible and gives greater throughput and better memory utilization, making it the more common approach.
However, because processes are not isolated from one another, thrashing can spread, leading to varying execution times.
Thrashing
Excessive paging: a process spends more time servicing page faults than executing.
Local Allocation
When, during a page fault, frames are selected only from the faulting process's own set of allocated frames, so one process's page faults do not affect another process. This can leave memory underutilized (internal fragmentation) or leave a process without enough memory.
Working-Set Model
Model that gives each process enough frames for its working-set size, based on recent references.
WSSi is the number of distinct pages referenced by Pi in the most recent Δ references. The total demand for frames, D, is the sum of all WSSi and approximates the total locality. If D > m (the number of available frames), thrashing will occur; it can be handled by suspending or swapping out a process.
Because exact tracking is expensive, a sampling approach (timer interrupts plus reference bits) is often used to approximate the working set. (working set determination)
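The definition above can be sketched directly (an illustration; the reference string is made up): the working set at time t is the set of distinct pages in the last Δ references:

```python
def working_set(refs, t, delta):
    """Pages referenced in the window of the most recent `delta`
    references ending at position t (t counts references, 1-based)."""
    return set(refs[max(0, t - delta):t])

refs = [1, 2, 1, 3, 2, 4, 4, 4, 5]
ws = working_set(refs, len(refs), 4)   # last 4 references: 4, 4, 4, 5
print(ws, "WSS =", len(ws))
```

D would then be the sum of `len(ws)` over all processes; if that exceeds the available frames, the model predicts thrashing.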
Working Set
The set of all pages that a process has referenced in the most recent Δ references (or seconds), Δ being a chosen window size.
Page-Fault Frequency-Based Model
A more direct approach to allocation than the working-set model: the page-fault rate of each process is kept between an upper and a lower bound. If the rate falls below the lower bound, the process has more frames than it needs, so frames are taken away. If the rate rises above the upper bound (thrashing), the process is given more frames.
File
A logical unit of data on a disk, holding related information. Formally, it is an array of bytes that can be created, read, written, and deleted.
Filename
Regular name of a file for human readability.
i-number
Unique low-level name for each file. Called an i-node in Unix/Linux.
Directory
A special type of file that is a logical container to organize files and other instances of itself.
File System
Kernel software for managing files and directories, making it the manager of secondary storage.
Unstructured Approach
Approach to a file system with no restrictions on how files are laid out; the data is arbitrary. UNIX and Linux implement files simply as a sequence of bytes.
Structured Approach
File system approach used more often by older systems, imposing rules and restrictions on layout. IBM mainframes implement files as a series of records or objects.
File Attribute
File Control Block: the metadata stored for a file, containing its name, identifier, type, location, size, protection, and usage times/dates. Stored in kernel memory space (in practice, kept in both storage and memory). Corresponds to the i-node in Unix/Linux and the MFT entry in NTFS.
System-Wide Open File Table
A global file table shared by all processes, each entry being an FCB. It contains:
All file information currently open by all processes
File open count
File attributes
File location on disk
Pointers to locations of the files in memory
Per-Process File Table
Table containing a process’s open files. Contains a pointer to the corresponding file in the system-wide open file table, a current position in the file (offset), and permission info. The PCB has a pointer to the table.
File Lock
A mechanism that mediates concurrent access by multiple processes to a file or part of a file.
Shared Lock - can be held by multiple processes concurrently (typically for reading); more complicated to implement and manage. Often used by database or caching systems.
Exclusive Lock - held by only one process at a time (typically for writing). Locking may be mandatory (Windows), where the OS enforces the lock and deadlock or starvation become possible, or advisory (Unix), where processes must cooperate to coordinate file access.
Direct/Random Read
When the OS reads a specified number of bytes from a given file position into the desired buffer.
Sequential Read
When the OS reads a specified number of bytes from the current file position, advances the buffer by the amount read, and advances the file position by the same amount.
Sequential Access
When data is processed in order, a byte or record at a time. It’s often used by compilers, editors, and many other programs, but it is inflexible with movement from one spot to another.
Direct Access
Accessing a file's bytes by relative block number. The file is viewed as fixed-length logical records that can be read in any order, found by the addresses of their blocks. Supported in Linux via lseek.
Memory-Mapped File
A file that is mapped into main memory, removing the need for write/read calls. However, the file information can be lost while in memory if an error occurs.
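A minimal sketch using Python's mmap module: the file content is changed through plain memory assignment, with no write() call on the mapped region (the temporary path exists only for this demo):

```python
import mmap
import os
import tempfile

# Create a small file to map.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

# Map the whole file into memory and modify it in place.
with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mem:
        mem[0:5] = b"HELLO"     # plain memory assignment, no write() call
        mem.flush()             # push the dirty page back to disk

with open(path, "rb") as f:
    print(f.read())             # b'HELLO world'
```

The flush() call illustrates the risk mentioned above: until dirty pages are written back, a crash loses the in-memory changes.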
Single-Level Directory
Directory structure with a single directory containing all files, so every filename must be unique. Used by early computers due to small disk sizes.
Two-Level Directory
Directory structure where each user has a separate directory, but each file under a user must have a unique name.
Tree-Structured Directory
A directory structure where directory nesting is limited only by disk size. Directories must be emptied before they can be deleted.
Hard Link
A new directory entry that serves as another name for an already-existing file, sharing the same i-node. The file's data is deleted only when all hard links to it have been removed.
Soft/Symbolic Link
A "file" that is really just a pointer from one pathname to another. It has its own i-node. The target file can be deleted while symbolic links to it still exist, leaving those links dangling (broken).
Deadlock
Condition where 2+ threads/processes are waiting for an event that can only be generated by these same threads
Deadlock Detection
Finding instances of deadlock when threads stop making progress
Deadlock Prevention
Imposing restrictions on a program to forbid deadlock from possibly occurring in advance
Deadlock Avoidance
Using algorithms that check resource requests and availability at runtime to not let deadlock occur
Starvation
Thread waiting indefinitely for resources that are being used by other threads
Conditions for deadlock
Mutual exclusion, hold and wait, no preemption, circular wait
Mutual Exclusion (Deadlock)
A resource can only be used by one thread at a time. Deadlock can be prevented by only using sharable resources that don’t need to be locked.
Hold and Wait
At least one thread holds a resource and is also waiting for other resources to become available. Deadlock can be prevented by either having every process have all of its necessary resources before executing, eliminating the need to wait; or by only allowing each process to request a resource when it has none, eliminating the ability to hold.
No preemption
A thread can only release a resource voluntarily, cannot be forced to by the OS or another thread. Deadlock can be prevented by having a process release its resources if it cannot obtain the resources it wants, or having the desired resources preempted from other waiting processes.
Circular Wait
When multiple threads are waiting for each other’s resource, such that the thread that has resource 1 is waiting for resource 2, the thread that has resource 2 is waiting for resource 3, etc until it loops.
Deadlock can be prevented by imposing a total order on resources: assign each resource a number, and require that a thread holding resource i may only request resources numbered greater than i.
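Lock ordering can be sketched with Python threads (an illustration; the lock indices serve as the resource numbers). Both threads want both locks, but each acquires them in increasing index order, so no cycle can form:

```python
import threading

locks = [threading.Lock() for _ in range(3)]   # resources 0, 1, 2
results = []

def use_both(i, j, tag):
    """Acquire locks i and j in the fixed global order, then record tag."""
    first, second = (i, j) if i < j else (j, i)  # always lower index first
    with locks[first]:
        with locks[second]:
            results.append(tag)

# Without ordering, (0 then 2) vs (2 then 0) is the classic deadlock pattern.
t1 = threading.Thread(target=use_both, args=(0, 2, "a"))
t2 = threading.Thread(target=use_both, args=(2, 0, "b"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(results))   # both threads finish: no deadlock
```

If each thread instead locked its arguments in the order given, the two threads could each hold one lock and wait for the other forever.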
Resource-Allocation Graph (RAG)
A graph used for deadlock detection and avoidance. It has request edges that go from a thread to a resource, indicating that a thread has requested a resource, and assignment edges that go from a resource to a thread, indicating that the resource has been allocated to the thread. If there are no cycles, there is no deadlock.
With deadlock avoidance, it additionally has the claim edge, indicating that a thread might request a resource but not right now. Requests can only be granted if a cycle isn’t formed by doing so.
Banker’s Algorithm
Deadlock avoidance algorithm used for resources with multiple instances. N threads, M resource types. Each thread declares the maximum number of resources it will need, and the objective is to find a safe sequence of resource allocation.
Can also be used with deadlock recovery, looking at current requests against available resources instead of future need against available resources.
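A sketch of the safety algorithm at the core of the Banker's Algorithm, run on a standard five-thread, three-resource example state (the matrices are illustrative):

```python
def is_safe(available, allocation, maximum):
    """Return a safe sequence of thread indices, or None if unsafe."""
    n, m = len(allocation), len(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]
    work, done, sequence = list(available), [False] * n, []
    while len(sequence) < n:
        for i in range(n):
            # pick any unfinished thread whose remaining need fits in work
            if not done[i] and all(need[i][j] <= work[j] for j in range(m)):
                work = [work[j] + allocation[i][j] for j in range(m)]
                done[i] = True
                sequence.append(i)
                break
        else:
            return None     # no thread can finish: the state is unsafe
    return sequence

seq = is_safe([3, 3, 2],
              [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]],
              [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]])
print(seq)   # a safe sequence exists for this state
```

A request is granted only if the state that would result still passes this check; otherwise the requesting thread waits.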
Deadlock Recovery
Getting out of deadlock after it already happens.
Resource Preemption (Deadlock Recovery)
Preempting some resources from processes and giving them to other processes until deadlock is over.
Includes victim selection and rollback, can lead to starvation.
Physical Address
A real address in memory, seen by the memory unit
Logical/Virtual Address
An address relative to the start of the process's address space. Generated by the CPU.
Address Binding
Assignment of a program’s address (and ways of doing it)
Compile Time Address Binding
The compiler generates exact physical memory addresses for the process, so the program must be loaded at that fixed location (and recompiled to move).
Loading Time Address Binding
The compiler generates relocatable addresses, and at load time the linker and loader determine the process's starting position in memory.
Execution Time Address Binding
The compiler generates logical addresses, and the OS can place the process anywhere it wants, even moving it during execution. When the program references an address, the OS translates it from logical to physical with hardware support (the MMU).
Memory-Management Unit (MMU)
Hardware device that maps logical addresses to physical addresses at run time. It uses a relocation register that is added to every address generated by a user process at the time it is sent to memory.
Contiguous Allocation (Memory)
When segments of memory are allocated to a process so that all of the process is together.
Hole
A block of available memory.
First-Fit Memory Allocation
Allocate a process’s memory to the first large enough hole
Best-Fit Memory Allocation
Allocate the smallest hole that is big enough for a process
Worst-Fit Memory Allocation
Allocate the largest hole that is big enough for a process
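The three strategies above can be sketched over a list of hole sizes (the hole sizes and the 212-unit request are illustrative):

```python
def pick_hole(holes, size, strategy):
    """Return the index of the hole chosen by first/best/worst fit,
    or None if no hole is large enough."""
    fits = [(i, h) for i, h in enumerate(holes) if h >= size]
    if not fits:
        return None
    if strategy == "first":
        return fits[0][0]                         # first hole that fits
    if strategy == "best":
        return min(fits, key=lambda p: p[1])[0]   # tightest fit
    return max(fits, key=lambda p: p[1])[0]       # worst fit: largest hole

holes = [100, 500, 200, 300, 600]
print(pick_hole(holes, 212, "first"))   # 1 (500 is the first that fits)
print(pick_hole(holes, 212, "best"))    # 3 (300, the tightest fit)
print(pick_hole(holes, 212, "worst"))   # 4 (600, the largest hole)
```

Best fit minimizes the leftover fragment per allocation; worst fit keeps leftover fragments large enough to be reusable; first fit is cheapest to run.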
External Fragmentation
When there is enough unallocated memory in total to fit a process, but it is not contiguous.
Compaction
Mass relocation of processes to remove external fragmentation, can be done during swapping.
Internal Fragmentation
When allocated memory is slightly larger than the requested memory, leaving the difference unused.
Paging
A way of addressing external fragmentation: physical memory is divided into fixed-size blocks called frames, and logical memory into blocks of the same size called pages. A process's pages are contiguous in its logical address space, but the frames holding them need not be contiguous in physical memory.
Page Table
Table to translate a logical address to a physical address with paging. (See notes for address translation).
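The translation a page table enables can be sketched as follows, assuming 4 KiB pages and a plain dict as the table (both are illustrative choices):

```python
PAGE_SIZE = 4096   # assumed 4 KiB pages

def translate(logical_addr, page_table):
    """Split a logical address into (page number, offset), look the page
    up in the page table, and rebuild the physical address."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]            # a missing key models a page fault
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}               # page number -> frame number
print(translate(4100, page_table))      # page 1, offset 4 -> 2*4096 + 4 = 8196
```

Real hardware does the same split with bit operations, since the page size is a power of two: the high bits index the table, the low bits pass through unchanged.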
Page-Table Base Register (PTBR)
A register that points to the page table in memory
Page-Table Length Register (PTLR)
A register that indicates the size of the page table with x86 and ARM
Translation Look-Aside Buffer (TLB)
Associative memory used to make page-table access quicker. It stores page numbers (key) and frame numbers (value). The TLB is flushed on every context switch, since, due to its small size, its entries are associated with only one process.
Effective Access Time (EAT)
Without a TLB, EAT = 2 × memory access time (one access for the page table plus the actual memory access)
With a TLB, where M = memory access time, T = TLB access time, and P = TLB miss rate:
EAT = (T + M)(1 − P) + (T + 2M)P
A TLB hit costs one memory access; a miss costs an extra memory access to read the page table.
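A sketch of the computation with assumed timings (100 ns memory access, 10 ns TLB lookup, 2% miss rate): a hit costs T + M, a miss costs T + 2M, and P is the miss rate:

```python
def eat(mem_ns, tlb_ns, miss_rate):
    """EAT = (T + M)(1 - P) + (T + 2M)P, with P the TLB miss rate."""
    hit = tlb_ns + mem_ns            # TLB lookup + memory access
    miss = tlb_ns + 2 * mem_ns       # TLB lookup + page table + memory
    return hit * (1 - miss_rate) + miss * miss_rate

print(eat(100, 10, 0.02))   # close to the 110 ns hit cost when misses are rare
```

With a 98% hit rate the average is 112 ns, barely above the 110 ns hit cost, versus 200 ns with no TLB at all.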
Valid/Invalid Bit
Bit used by page tables and the TLB, where one value indicates that the associated page is in the process's logical address space, and the other value indicates that it is not or cannot be confirmed.
Process Arrival with TLB
User creates process that requires N pages
If N frames are available, the OS allocates these frames to the pages
Each page is put in a frame and its frame number is put in the corresponding entry in the page table
TLB is flushed at the beginning of the context switch, marking valid-invalid bits as invalid
OS starts process
OS loads TLB entries as pages are accessed by the process, replacing an existing entry if the TLB is full
PCB with paging
Includes Page Table Register and possible copy of TLB
Context Switch with TLB
Copy PTBR to PCB
Copy TLB to PCB (optional)
Flush TLB (mark all invalid)
Restore PTBR for new process
Restore the TLB for the new process if it was saved
Contiguous Allocation (Disk)
The OS allocates a contiguous chunk of free blocks when it creates a file, the FCB holding the start location and length. It has problems similar to contiguous memory allocation, such as when a file changes size. However, it is easy to implement, and head movement is minimized during sequential access, which is good for an HDD.
Linked Allocation
Each file is allocated a linked list of blocks scattered across the disk; the FCB holds pointers to the first and last blocks, and each block holds a pointer to the next one.
This is better for an SSD: there is no external fragmentation, files can change size easily, and sequential access is supported efficiently.
However, a linked list allows no efficient random access, the pointers waste space, and the scattered blocks cause many seek operations, which is bad for an HDD.
Indexed Allocation
When a file is allocated multiple scattered blocks, including an index block that has pointers to all other blocks.
It supports random access, has no external fragmentation, and has no real size limit.
The FCB, however, becomes harder to design, and the index block itself is desired to be kept small.
Linked-List Index Blocks
If a file outgrows its index block, the OS allocates another (overflow) index block and links it to the previous one.
Multi-Level Indexed Allocation
Multiple levels of index blocks, each level pointing to the next and the last level pointing to the data blocks, allowing file sizes to grow. Random access in large files is inefficient, which is bad for an HDD (many seek operations).
Combined Scheme
Modern approach of index blocks that combines the linked list indexed allocation with the multi-level indexed allocation. In EXT there are direct blocks that point to the data, single indirect blocks that link to a level that points to the data, double indirect blocks that link to a two-level system that points to the data, and triple indirect blocks that link to a three-level system that points to the data. As the levels increase, the available space increases.
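The reachable file size under the combined scheme can be sketched assuming 4 KiB blocks and 4-byte block pointers, with twelve direct pointers and one each of the indirect pointers (all assumed figures, in the style of classic EXT i-nodes):

```python
BLOCK = 4096           # assumed block size in bytes
PTRS = BLOCK // 4      # 4-byte pointers per index block -> 1024

def max_file_blocks(direct=12, single=1, double=1, triple=1):
    """Data blocks addressable by an EXT-style combined-scheme i-node."""
    return (direct                    # direct pointers to data
            + single * PTRS           # one level of indirection
            + double * PTRS ** 2      # two levels
            + triple * PTRS ** 3)     # three levels

print(max_file_blocks())                      # blocks reachable from one i-node
print(max_file_blocks() * BLOCK / 2 ** 40)    # max file size, in TiB
```

Each added level multiplies the reach by 1024, which is why the triple-indirect term dominates: the scheme stays cheap for small files (direct blocks only) while still addressing files of several TiB.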
Super Block
The first block in an EXT-like disk structure, describing the number of i-nodes and data blocks in the file system, as well as where the i-node blocks begin; in general, metadata about the file system itself.
i-node Bitmap
Bitmap of which i-node blocks are free, the 2nd block in an EXT-like system.
Data Bitmap
Bitmap of which data blocks are free, the 3rd block in an EXT-like system.
Steps to Create /foo/bar
Read root i-node
Read root data to find i-node number for foo
Read i-node number for foo
Read foo data to see if a block is available
Read i-node bitmap for bar to find an available i-node block
Write i-node bitmap
Write to foo data block (i-node number for bar)
Read i-node for bar to see if it is ok
Write i-node for bar to update it
Write i-node for foo (foo data block id)
Steps to Write to /foo/bar
Read i-node for /foo/bar
Read data bitmap to find an available block
Write to (update) data bitmap flag for data block
Write data to /foo/bar data block
Write /foo/bar i-node block to update access time
Steps to Open /foo/bar (if not accessed recently)
Read root i-node to find root data
Read root data to find foo i-node
Read foo i-node to find foo data
Read foo data to find bar i-node
Read bar i-node
Steps to Read /foo/bar (if accessed recently)
Read bar i-node to get bar data
Read bar data
Write to bar i-node to indicate it’s been accessed
Bitmap / Bit Vector of Free Space
A bitmap with one bit per disk block, a 1 indicating that the block is free. It's easy to implement, but scanning it becomes expensive on large disks.
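The bit-vector approach can be sketched with a tiny allocator (an illustration; real file systems pack the bits into words and scan a word at a time):

```python
class FreeBitmap:
    """Free-space bitmap: bit i is 1 when block i is free."""

    def __init__(self, nblocks):
        self.bits = [1] * nblocks       # every block starts out free

    def allocate(self):
        """Scan for the first free block, mark it used, return its index."""
        for i, bit in enumerate(self.bits):
            if bit:
                self.bits[i] = 0
                return i
        return None                     # disk full

    def free(self, i):
        self.bits[i] = 1                # mark block i free again

bm = FreeBitmap(8)
print(bm.allocate(), bm.allocate())     # 0 1: lowest-numbered free blocks
bm.free(0)
print(bm.allocate())                    # 0 again: the scan restarts from bit 0
```

The linear scan in allocate() is exactly the cost that grows with disk size, which is the weakness noted above.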
Linked List of Free Space
All free blocks on the disk are held in a linked list, with the head of the list cached in kernel memory. It’s easy to implement and easy to add or remove free blocks, but pointer management leads to overhead and traversal is inefficient.
Crash Consistency
A disk data structure going back to the previous consistent state or being correctly updated after a crash during a write operation.
Crash Consistent Scenarios
Just the data block is written to the disk, and the i-node and bitmap are not updated (the write is simply lost; the metadata stays consistent)
The i-node and bitmap are both updated, but the data block is not written (the metadata is consistent with itself, though the file points at garbage data)
Crash Inconsistent Scenarios
Either the i-node or the bitmap are updated, but not both. Data block may or may not be updated.
Journaling
Method of creating crash consistency: when updating the disk, create a log of what’s supposed to happen. If a crash occurs during the actual disk write, the OS can fix it based on the log.
Has a TxB and TxE block to indicate the beginning and end of the log.
Data Journaling (Basic)
Journal write the TxB, i-node, bitmap, data block, and TxE blocks all at once.
Checkpoint - Update the actual disk structures