Memory Management
The OS function responsible for tracking memory usage, allocating memory to processes, and deallocating it when finished. It ensures efficient use of memory and protection between processes.
Swapping
A memory management technique where a process is temporarily moved from main memory to disk (backing store) and later brought back for execution, enabling more processes than physical memory allows.
Base and Limit Registers
Hardware registers used for dynamic relocation and protection. The base register holds the starting physical address, and the limit register defines the size of the address space; every memory access is checked against them.
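The base-and-limit check can be sketched in a few lines of Python (the register values and the exception name are invented for the example):

```python
# Minimal sketch of base/limit relocation and protection.
class MemoryProtectionError(Exception):
    """Raised when a logical address falls outside the limit register."""

def translate(logical_addr: int, base: int, limit: int) -> int:
    # Every access is checked against the limit before the base is added.
    if logical_addr < 0 or logical_addr >= limit:
        raise MemoryProtectionError(f"address {logical_addr} outside limit {limit}")
    return base + logical_addr

print(translate(100, base=30000, limit=12000))  # relocated to 30100
```

An access at or beyond the limit (e.g. `translate(20000, base=30000, limit=12000)`) traps instead of reading another process's memory.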
Fixed Partitioning
Memory is divided into fixed-size partitions at system startup, and each process is loaded into one partition. Simple, but causes internal fragmentation and limits the number of concurrent processes.
Dynamic Partitioning
Memory is allocated dynamically based on process size. Eliminates internal fragmentation but introduces external fragmentation and may require compaction.
Internal Fragmentation
Wasted space inside an allocated memory block when the allocated partition is larger than the process needs.
External Fragmentation
Free memory exists but is scattered in small non-contiguous blocks, making allocation difficult even when total free space is sufficient.
Compaction
A technique to reduce external fragmentation by moving processes so that free memory becomes one contiguous block.
First-Fit Algorithm
Allocates the first memory block large enough for the request. Fast and simple, but leaves leftover holes scattered near the beginning of memory.
Best-Fit Algorithm
Allocates the smallest block that fits the request. Minimizes leftover space per allocation but creates many tiny unusable holes and is slower, since it must search the whole list.
Next-Fit Algorithm
Similar to first-fit, but resumes searching from the position of the last allocation instead of from the beginning.
Worst-Fit Algorithm
Allocates the largest available block, so that the leftover hole stays large enough to be useful. Generally inefficient in practice.
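The three fit strategies can be compared on one made-up free list (hole sizes and the request are invented for the example):

```python
# Each function returns the index of the chosen hole, or None if no hole fits.
def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i          # first hole big enough
    return None

def best_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None   # smallest hole that fits

def worst_fit(holes, size):
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None   # largest hole available

holes = [100, 500, 200, 300, 600]   # free block sizes in KB
print(first_fit(holes, 212))   # 1 -> the 500 KB hole
print(best_fit(holes, 212))    # 3 -> the 300 KB hole
print(worst_fit(holes, 212))   # 4 -> the 600 KB hole
```

The same 212 KB request lands in a different hole under each policy, which is exactly the trade-off the three cards describe.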
Logical Address
An address generated by the CPU that is independent of the actual physical memory location.
Physical Address
The real address in main memory where data resides.
Address Translation
The process of converting a logical address into a physical address, performed at runtime by hardware such as the MMU.
MMU (Memory Management Unit)
Hardware that performs runtime address translation from logical to physical addresses.
Paging
Memory management scheme in which physical memory is divided into fixed-size frames and each process's logical memory into pages of the same size. Any page can be loaded into any free frame, eliminating external fragmentation.
Page Table
A per-process data structure maintained by the OS that maps logical page numbers to physical frame numbers.
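Paged address translation can be sketched as splitting the logical address into a page number and an offset (the 4 KB page size and the table contents are invented for the example):

```python
PAGE_SIZE = 4096                 # 4 KB pages, chosen for the example

page_table = {0: 5, 1: 9, 2: 1}  # page number -> frame number

def translate(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]     # a missing entry here would be a page fault
    return frame * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 36868
```

Note the offset passes through unchanged; only the page number is remapped.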
Page Fault
Occurs when a process tries to access a page not currently in memory, triggering OS intervention to bring it in from disk.
Virtual Memory
Technique allowing execution of processes without requiring all of their pages to be in memory, using disk as an extension of RAM.
Working Set
The set of pages a process is actively using. Helps determine how many frames a process needs.
Thrashing
A condition where excessive paging occurs, causing the system to spend more time swapping pages than executing processes.
TLB (Translation Lookaside Buffer)
A fast hardware cache that stores recent page table entries to speed up address translation.
Page Replacement Algorithms
Strategies used to decide which page to evict when memory is full and a new page must be brought in.
Optimal Page Replacement
Replaces the page that will not be used for the longest time in the future. Gives the best possible performance but is not implementable in practice, since it requires future knowledge; it serves as a benchmark.
LRU (Least Recently Used)
Replaces the page that has not been used for the longest time in the past.
FCFS Page Replacement
Replaces the oldest loaded page (First-In, First-Out). Simple, but can perform poorly; adding frames can even increase the number of faults (Belady's anomaly).
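A short simulation makes Belady's anomaly concrete: on the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5, FIFO faults *more* with 4 frames than with 3. An LRU counter is included for comparison; the helper names are my own.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, faults = deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.popleft()            # evict the oldest loaded page
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)           # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used
            mem[p] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 <- Belady's anomaly
print(lru_faults(refs, 3), lru_faults(refs, 4))
```

LRU never exhibits the anomaly on this string: its fault count only drops as frames are added.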
Buffer Overflow
A vulnerability where a program writes more data into a buffer than it can hold, potentially overwriting adjacent memory and causing crashes or security exploits.
Unsafe Functions
Functions like gets() or strcpy() that do not check buffer bounds and can lead to buffer overflow vulnerabilities.
CPU Scheduling
The process of selecting which process gets CPU time next in a multiprogramming system.
Scheduler
OS component responsible for selecting the next process to execute.
Dispatcher
Module that performs the context switch, transfers CPU control to the selected process, and switches to user mode.
Dispatch Latency
Time required to stop one process and start another.
Preemptive Scheduling
The OS can interrupt a running process to switch the CPU to another.
Non-Preemptive Scheduling
A running process keeps the CPU until it finishes or blocks.
CPU Utilization
Percentage of time the CPU is actively working; should be maximized.
Throughput
Number of processes completed per unit time; should be maximized.
Turnaround Time
Total time from submission to completion of a process; should be minimized.
Waiting Time
Total time a process spends waiting in the ready queue.
Response Time
Time from request submission to first response; important in interactive systems.
FCFS Scheduling
Non-preemptive algorithm where processes are executed in arrival order. Simple, but suffers from the convoy effect.
Convoy Effect
A long process at the head of the queue delays many short processes in FCFS scheduling.
SJF (Shortest Job First)
Non-preemptive algorithm that runs the process with the shortest next CPU burst. Provably optimal for minimizing average waiting time, though burst lengths must be estimated in practice.
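The convoy effect and SJF's optimality both show up in one tiny computation (the burst lengths are a made-up workload with all jobs arriving at time 0):

```python
def avg_waiting(bursts):
    # Each job waits for the total burst time of everything run before it.
    elapsed, total_wait = 0, 0
    for b in bursts:
        total_wait += elapsed
        elapsed += b
    return total_wait / len(bursts)

bursts = [24, 3, 3]                 # a long job arrives first
print(avg_waiting(bursts))          # 17.0 under FCFS (convoy effect)
print(avg_waiting(sorted(bursts)))  # 3.0 under SJF ordering
```

Running the two short jobs first cuts the average waiting time from 17 to 3 time units, which is why SJF is the benchmark for this metric.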
SRT (Shortest Remaining Time)
Preemptive version of SJF that switches to a newly arrived job whose burst is shorter than the current job's remaining time.
Round Robin (RR)
Preemptive scheduling where each process runs for at most a fixed time slice (the quantum) before returning to the back of the ready queue. Ensures fairness and responsiveness.
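A minimal round-robin sketch, with the quantum and burst lengths invented for the example:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the order in which process IDs get the CPU."""
    queue = deque(enumerate(bursts))   # (pid, remaining burst time)
    order = []
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)              # process runs for up to one quantum
        if remaining > quantum:
            queue.append((pid, remaining - quantum))  # back of the queue
    return order

print(round_robin([5, 3, 1], quantum=2))  # [0, 1, 2, 0, 1, 0]
```

The short job (pid 2) finishes in its first slice, while the longer jobs keep cycling, which is the fairness/responsiveness trade-off the card describes.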
Priority Scheduling
Processes are scheduled according to priority. Can cause starvation of low-priority processes; aging is a common remedy.
Multilevel Queue Scheduling
Processes are divided into different queues based on type, each with its own scheduling algorithm.
Multilevel Feedback Queue
Processes can move between queues based on their behavior and CPU usage, improving flexibility and fairness.
CPU-bound Process
A process that spends most of its time performing computations.
I/O-bound Process
A process that spends most of its time waiting for I/O operations.
Short-term Scheduler
Selects which process in the ready queue gets the CPU next.
Long-term Scheduler
Decides which processes are admitted into the system.
Medium-term Scheduler
Handles swapping decisions (suspending and resuming processes).
Thread
The smallest unit of execution within a process; threads share the process's memory but have their own registers and stack.
Multithreading
The ability of a process to execute multiple threads concurrently.
User-Level Threads
Threads managed by a user-space library without kernel awareness. Fast to create and switch, but a blocking system call by one thread can block the entire process.
Kernel-Level Threads
Threads managed by the OS kernel. More flexible, but slower because thread operations require system calls.
Advantages of Multithreading
Improved responsiveness, resource sharing, and better CPU utilization.
Disadvantages of Multithreading
Complexity, synchronization issues, race conditions, and debugging difficulty.
Race Condition
Occurs when multiple threads access shared data concurrently and the result depends on the order of execution.
Thread Synchronization
Techniques (mutexes, locks, semaphores) used to control access to shared resources.
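The race-condition and synchronization cards can be demonstrated together: the increment below is a read-modify-write on shared data, and the lock makes it atomic (thread count and iteration count are arbitrary choices for the example).

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # without this lock, updates can interleave and be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000 with the lock; often less if the lock is removed
```

Deleting the `with lock:` line turns this into a race condition: two threads can read the same old value of `counter` and one update disappears.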
Refactoring Sequential to Multithreaded
Splitting a single-threaded task into multiple threads that run concurrently for a performance improvement.
Thread Bugs
Common issues include race conditions, deadlocks, and improper synchronization.
Deadlock
A situation where two or more threads are blocked indefinitely, each waiting for a resource held by another.