Algorithm
A finite set of instructions with a defined initial store and state, a starting point, and an unambiguous order of execution leading to an endpoint, known as the halt point.
Operating System (OS)
Manages resources, provides an interface for I/O devices, ensures system security, and facilitates multitasking by managing multiple processes and user interactions.
Program
A sequence of instructions representing an algorithm.
Generation 1 Computers
Used vacuum tubes, were unreliable, and required users to manually load programs using plug boards. They could only run one program at a time.
Generation 2 Computers
Used transistors, which were more reliable and faster than vacuum tubes, and introduced batch processing systems.
Multiprogramming in generation 2B
Kept the CPU busy by managing multiple programs in different phases of execution, forming a pipeline where one program could read, another process, and another write simultaneously.
Generation 3 computers
The use of integrated circuits made computers fast enough to switch between many tasks per second, supporting multiple users interacting with the system through terminals and giving each the illusion of real-time interaction.
System Resources
The OS manages system resources such as CPU time, working memory (main memory), storage (disks or SSDs), and device input/output (e.g., keyboard, monitors).
Files
Logically named persistent storage used for data or device communication.
Process
A program in execution, with allocated memory and associated resources like stack, pointers, and files.
System Call
A named call that involves switching to kernel mode for execution, needed for protected operations like memory allocation or file access. It's initiated by user programs via system library routines and executed by the OS dispatcher.
Procedure Call
A call executed entirely in user space, involving local variables and parameters; no switch to kernel mode is required.
POSIX
A set of standards for the design of Unix-compatible operating systems. It defines system calls for process management (e.g., fork, exit) and file operations (e.g., open, close), ensuring consistency across different OS platforms.
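As a sketch, the POSIX process-management calls named above (fork, exit) can be exercised through Python's os wrappers on a Unix-like system; the status value 7 here is an arbitrary illustration.

```python
import os

# POSIX fork(): duplicate the calling process. The child sees pid == 0,
# the parent sees the child's process ID.
pid = os.fork()
if pid == 0:
    os._exit(7)              # POSIX _exit(): child terminates with status 7
else:
    _, status = os.waitpid(pid, 0)        # parent waits for the child
    child_status = os.WEXITSTATUS(status)  # extract the child's exit status
```

After the wait, child_status holds 7, showing how the parent observes the child's voluntary termination.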
Hardware Interrupt
When a device signals the CPU that it needs attention, typically by raising a voltage line.
Precise Interrupts
Occur after an instruction completes, ensuring a known execution state but potentially delaying interrupt handling.
Imprecise Interrupts
May interrupt mid-instruction, allowing faster handling but increasing complexity in resuming partially executed instructions.
Interrupt Controller
Uses interrupt masks, binary values stored in a register, to determine which interrupt signals to listen to. If a new interrupt arrives while another is being handled, the mask will block the new request until the previous one is completed.
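The mask register described above can be modeled with plain bitwise operations; the 8-bit width and the particular mask value here are illustrative assumptions, not tied to any real controller.

```python
# An 8-bit interrupt mask register: bit i set to 1 means the controller
# listens to IRQ line i; 0 means that line is masked out (ignored).
mask = 0b0000_0101           # lines 0 and 2 enabled, all others masked

def is_enabled(mask, line):
    # Shift the mask right so the target bit lands in position 0, then test it.
    return (mask >> line) & 1 == 1

enabled_lines = [line for line in range(8) if is_enabled(mask, line)]
```

With this mask, only lines 0 and 2 would be delivered to the CPU; requests on any other line stay pending or are dropped, depending on the controller.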
Microkernel Architecture
Only core OS functionalities are in kernel space, and modules communicate via message passing, allowing easier updates.
Monolithic Kernel
All OS components like file systems and device drivers are compiled together and operate in kernel space, requiring system calls for access.
Pseudoparallelism
Achieved by time-slicing, where each process is given a small time slot called a quantum to execute on the CPU, even on single-core systems.
Two ways process can be terminated, and how do they differ
Voluntarily (e.g., when a process completes its instructions or encounters an abnormal exit like file access failure) or involuntarily (e.g., division by zero or receiving a kill signal from another process).
Process Control Block (PCB)
Contains details necessary for managing a process, such as register values, memory management info (text, data, heap, stack pointers), and file management info (file descriptors).
PCB importance
Ensures that the OS can effectively manage, execute, and track processes.
OS Scheduler
Manages process scheduling by determining which process to run during each quantum of time. It responds to interrupts, ensures all processes receive execution time, and prevents starvation by switching processes as needed.
Process States
Running (executing), ready (loaded in memory but not executing), blocked (waiting for an event), initializing (preparing to run), or exiting (cleaning up after execution).
Context Switch
The process of storing the state of a process so that it can be resumed later. Requires the OS to save the current process's CPU register values and update the process's state in the Process Control Block (PCB).
What causes a context switch?
I/O operations, process yielding, or completion of a quantum of execution.
Single-threaded Process
A program in execution with one thread of execution, having its own memory areas, CPU registers, and a stack for procedure calls.
Multi-threaded Process
Threads in the same process each have their own registers and stack, but they share code, data, and file descriptors, making them lightweight processes.
Advantages of Threads
Lower memory costs (shared memory), easier communication between threads, reduced creation/destruction time, and faster context switching compared to processes.
User-space Threads
Managed by a user-level library without OS awareness, giving quicker context switches; however, if one thread makes a blocking system call, the entire process blocks.
Kernel-space Threads
Managed by the OS, allowing individual scheduling and blocking without affecting others.
Race Condition
When two or more processes or threads attempt to perform operations on the same resource simultaneously, leading to undesirable outcomes and data corruption.
Critical Regions
Code sections protected from interruptions, ensuring that operations within them are atomic.
How to avoid race conditions?
Four conditions must hold: no assumptions about CPU speed or count (concurrency), no two processes inside their critical regions simultaneously (safety), no process outside its critical region may block others (liveness), and no process waits indefinitely to enter its critical region.
What are the two primary synchronization criteria in process synchronization?
Safety and Liveness. Safety ensures data consistency during access by processes, while Liveness ensures all processes eventually complete.
Shadow Copies
Prevent inconsistent data from being read during a write operation by allowing readers to access an old version of data while the new version is being prepared.
Barriers
Require all processes to reach a specified point before any can continue, ensuring that no process moves forward until all are ready.
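A short sketch of the barrier behavior described above, using Python's threading.Barrier with three threads (the thread count and event names are arbitrary):

```python
import threading

barrier = threading.Barrier(3)   # all 3 threads must arrive before any proceed
lock = threading.Lock()
events = []

def worker(name):
    with lock:
        events.append(("before", name))
    barrier.wait()               # block here until all three have arrived
    with lock:
        events.append(("after", name))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Regardless of scheduling, every "before" event is recorded ahead of every "after" event, because no thread passes the barrier until all three have reached it.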
Mutexes
Allow only one thread to access a resource at a time and can only be unlocked by the thread that locked them.
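A minimal sketch of mutex-protected access using Python's threading.Lock; the counter and thread counts are illustrative.

```python
import threading

lock = threading.Lock()
counter = 0

def worker():
    global counter
    for _ in range(10_000):
        with lock:           # only one thread may update counter at a time
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because every increment happens inside the lock, the final value is exactly 4 × 10,000; without the lock, updates could interleave and be lost.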
Semaphores
Counters that can be incremented (up) or decremented (down) by any thread; a semaphore initialized to N allows up to N threads to access a resource concurrently.
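A sketch of a counting semaphore limiting a region to two concurrent threads; the sleep simulates work and the tracking variables exist only to observe the limit.

```python
import threading
import time

sem = threading.Semaphore(2)   # at most 2 threads inside the region at once
lock = threading.Lock()
active = 0                     # threads currently inside the region
max_active = 0                 # highest concurrency observed

def worker():
    global active, max_active
    with sem:                  # down (acquire); blocks if 2 are already inside
        with lock:
            active += 1
            max_active = max(max_active, active)
        time.sleep(0.01)       # simulate work inside the region
        with lock:
            active -= 1
                               # leaving the with-block performs up (release)

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Six threads contend, but the observed concurrency never exceeds the semaphore's initial count of 2.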
Monitors
Language constructs that use conditional variables to manage access to critical regions, ensuring that only one process can enter at a time and can be implemented implicitly (like in Java) or explicitly (like in Python).
Busy Waiting
Occurs when processes that can't acquire a lock continuously try to do so without blocking, leading to wasted CPU time.
Busy waiting with Flagging Interest
Allows a process to enter the critical region multiple times if the other process shows no interest, avoiding excessive waiting times.
Strict Alternation
A method where two processes take turns entering a critical region, enforced by a shared variable that indicates whose turn it is.
Limitation of Strict Alternation
Can lead to inefficiencies when processes have different execution times, causing one process to wait unnecessarily for the other.
Atomic Operations (Mutual Exclusion)
Ensure that specific actions, such as checking and setting a lock, are completed as a single operation, preventing other processes from interfering in between.
Mutual Exclusion
Ensures that only one process can access a critical section of code at a time, preventing conflicts and data corruption.
Deadlock
Occurs when two processes are each waiting for the other to release a lock, causing both to be stuck indefinitely.
Peterson's Solution
Combines flagging and strict alternation to avoid deadlock by ensuring that processes check both the turn and interest status of the other process before entering the critical region.
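A sketch of the two-thread algorithm just described. It assumes sequentially consistent memory, which CPython's GIL effectively provides; on real hardware this would need memory barriers. The iteration count and switch interval are arbitrary.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # switch threads often so spinning resolves quickly

flag = [False, False]        # flag[i]: thread i wants to enter
turn = 0                     # which thread must yield when both are interested
counter = 0
N = 1000

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(N):
        flag[me] = True      # declare interest
        turn = other         # give the other thread priority
        while flag[other] and turn == other:
            pass             # busy wait only while the other is interested
        counter += 1         # critical region
        flag[me] = False     # leave the critical region

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Checking both the flag and the turn means a thread spins only when the other is actually interested, avoiding strict alternation's forced waiting while still guaranteeing mutual exclusion.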
Test and Set Lock (TSL)
Atomically checks and sets a lock value, allowing a process to determine if it can acquire the lock without race conditions.
Locking server in message passing for mutual exclusion
Manages lock requests from processes, allowing them to block until a lock is available, facilitating synchronization across potentially distributed systems.
Process scheduling in modern computing systems
Allows multiple programs to run simultaneously (e.g., one performing I/O while another uses the CPU).
Long-term scheduler (admission scheduler)
Controls which jobs are admitted into the system, maximizing throughput in batch systems.
Medium-term scheduler (memory scheduler)
Manages memory usage by moving processes between memory and storage.
Short-term scheduler (CPU scheduler)
Decides which process runs next, ensuring efficient CPU usage.
Batch systems
Focus on maximizing throughput.
Interactive systems
Prioritize quick user response times.
Real-time systems
Ensure processes meet strict deadlines predictably.
Preemptive scheduling
Allows the operating system to interrupt a running process to switch to another.
Non-preemptive scheduling
Lets processes run until they complete or voluntarily yield the CPU.
Batch scheduler
Focuses on maximizing CPU utilization and minimizing turnaround time by efficiently managing job throughput.
Turnaround time
Calculated as the difference between a process's finish time and its arrival time.
(NP) First-come, first-serve (FCFS)
Executes processes in the order they arrive.
(NP) Shortest job first
Prioritizes processes that take the least time to complete.
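A toy comparison of FCFS and shortest job first, assuming all jobs arrive at t=0 (so turnaround equals finish time); the burst times are made up for illustration.

```python
# Turnaround = finish time - arrival time; with arrival at 0 they coincide.
def turnarounds(bursts):
    t, out = 0, []
    for b in bursts:
        t += b
        out.append(t)        # this job finishes (and its turnaround) at t
    return out

bursts = [6, 3, 1]                    # arrival order
fcfs = turnarounds(bursts)            # run in arrival order
sjf = turnarounds(sorted(bursts))     # run shortest burst first
avg_fcfs = sum(fcfs) / len(fcfs)
avg_sjf = sum(sjf) / len(sjf)
```

FCFS gives turnarounds [6, 9, 10] (average ≈ 8.33), while SJF gives [1, 4, 10] (average 5.0): running short jobs first lowers average turnaround time.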
(NP) Priority scheduling
Runs processes based on assigned priority levels.
Crystal ball algorithm
A theoretical, non-implementable benchmark that represents the optimal sequence of process execution.
(P) Shortest Remaining Time First algorithm
This preemptive algorithm selects the process with the shortest remaining execution time to run next. It is optimal in minimizing turnaround time but may not account for context switch costs.
(P) Round Robin algorithm
Processes run for a fixed time slice (quantum) before the scheduler moves to the next process in the queue.
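A small round-robin simulation, assuming all processes arrive at t=0; the process names, bursts, and quantum are illustrative.

```python
from collections import deque

def round_robin(bursts, quantum):
    # bursts maps process name -> total CPU time needed; all arrive at t=0.
    queue = deque(bursts.items())
    t, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for one quantum or until done
        t += run
        if remaining - run == 0:
            finish[name] = t            # record finish time
        else:
            queue.append((name, remaining - run))  # back of the queue
    return finish

finish = round_robin({"A": 5, "B": 3}, quantum=2)
```

With a quantum of 2, the schedule is A(0-2), B(2-4), A(4-6), B(6-7), A(7-8), so B finishes at 7 and A at 8.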
Multiple-Queue Priority scheduling
Categorizes processes into different priority queues, with each queue managed using strategies like Round Robin.
Static scheduling
Pre-defines the order of tasks before execution.
Dynamic scheduling
Makes decisions in real-time based on deadlines and system priorities.
Why does the operating system implement Memory management
To track and allocate memory efficiently, handle memory protection, and convert virtual addresses to physical addresses.
Physical addresses
Absolute locations on memory chips.
Virtual addresses
Relative addresses translated by hardware (with OS support) to physical locations.
Fence register
An early memory protection strategy that separates user program memory from OS memory, blocking access if a process tries to access addresses below the fence value.
Base and limit registers
Define memory boundaries for processes, converting logical addresses to physical addresses during runtime to ensure correct memory access.
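A sketch of base-and-limit translation as described above; the base, limit, and address values are invented for illustration.

```python
def translate(logical, base, limit):
    # A logical address is valid only if it falls below the limit;
    # the physical address is then base + logical.
    if logical >= limit:
        raise MemoryError("access beyond limit register")
    return base + logical

phys = translate(100, base=4000, limit=2000)   # valid: maps to 4100
```

An access at logical address 2500 with the same registers would exceed the limit and be rejected, which is exactly the protection these registers provide.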
Bitmaps in memory management
Represent occupied and free memory blocks using bits.
Linked lists in memory management
Track memory usage with nodes representing occupied and empty blocks.
First fit memory allocation
Allocates the first available space.
Best fit memory allocation
Allocates the smallest suitable space.
Worst fit memory allocation
Allocates the largest space.
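The three placement strategies above can be sketched over a list of free-hole sizes; the hole sizes and request size here are arbitrary examples.

```python
def first_fit(holes, size):
    # Index of the first hole large enough, or None.
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    # Index of the smallest hole that still fits.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    # Index of the largest hole.
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [5, 12, 8]   # free-hole sizes in scan order
```

For a request of size 6: first fit picks the hole of size 12 (index 1), best fit picks the size-8 hole (index 2, leaving the smallest leftover), and worst fit also picks the size-12 hole (leaving the largest leftover).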
External fragmentation
Occurs when small fragments of memory remain between allocated partitions that are too small to be used by other processes.
Compaction
Consolidates free memory by grouping allocated partitions together, eliminating empty spaces between them. Similar to defragmentation in disk drives.
Segmentation
Divides a process's address space into variable-size segments such as text, heap, and stack; the OS swaps segments in and out of memory as needed.
Paging
Divides memory into fixed-size pages; the OS swaps pages in and out of memory as needed.
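With fixed-size pages, a virtual address splits into a page number and an offset within that page; the 4 KiB page size and the sample address are illustrative.

```python
PAGE_SIZE = 4096   # 4 KiB pages (an assumed, common page size)

def split(vaddr):
    # Page number is the quotient, offset the remainder.
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

page, offset = split(5000)
```

Virtual address 5000 lands on page 1 at offset 904; the page table maps that page number to a physical frame, and the offset is carried over unchanged.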
What is the function of an overlay manager?
Manages the loading and unloading of memory overlays.
What does an overlay manager control?
It controls which program segments are in memory at any time.
In a monolithic kernel architecture, most modules exist in user space, and changes to them require recompiling the entire operating system.
False
A process is defined as a program in execution, and it includes pointers, counters, and variables.
True
In a typical representation of a process in memory, the stack grows upwards and the heap grows downwards.
False
Processes in a microkernel architecture communicate primarily through message passing, reducing the need for system calls.
True
The process control block (PCB) contains information about the process's file management, register values, and much more.
True
The scheduler is a part of the operating system responsible for selecting which process runs next, balancing the needs of processes to prevent starvation.
True
The scheduler sets an alarm to manage process execution time, ensuring that a process runs only for its allocated quantum of time.
True
A process in the "Ready" state is runnable but not currently executing on the CPU.
True
A context switch occurs only due to voluntary CPU yield operations, such as a process going to sleep.
False
A context switch involves saving the state of the currently running process so it can resume later.
True
Kernel-space threads can have threads with different states (e.g., ready, blocked).
True
Threads in a multithreaded process share code, data, and file descriptors, but have their own register information and stack.
True