Read/Write Head
A component of a disk drive that reads data from and writes data to the disk surface. It moves back and forth across the disk to access different data tracks.
Sector
A subdivision of a track on a disk, typically containing a fixed amount of data, used to organize data storage. It is the smallest addressable storage unit on a disk.
Track
A circular path on a disk where data is recorded, consisting of multiple sectors, allowing the read/write head to access information.
Cylinder
A set of tracks located at the same position on multiple disk surfaces, allowing for efficient data retrieval as the read/write head moves between layers.
Spindle
A central axis on a disk drive that spins the disks, enabling the read/write heads to access data stored on the surfaces.
Process
A program in execution, consisting of code, a program counter, registers, stack, and heap. Each process operates in its own address space managed by the OS.
Critical Section
A section of code where a process accesses shared resources that must not be accessed by more than one process at the same time.
Examples:
– Updating a shared counter variable.
– Writing to a shared log file.
Mutual Exclusion
Ensures that only one process executes its critical section at any time.
Solution: Use a mutex lock or binary semaphore to prevent concurrent access.
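A minimal sketch of mutual exclusion with a POSIX mutex (the names `worker` and `run_counter_demo` are illustrative, not from the deck): two threads increment a shared counter, and the lock makes each read-modify-write step a critical section.

```c
#include <pthread.h>

/* Shared state guarded by a mutex (illustrative demo). */
static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* Each worker increments the shared counter 100000 times.
 * The mutex ensures only one thread is in the critical section. */
static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* enter critical section */
        counter++;                           /* critical section */
        pthread_mutex_unlock(&counter_lock); /* exit critical section */
    }
    return NULL;
}

/* Runs two workers and returns the final counter value. */
long run_counter_demo(void) {
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;
}
```

Without the lock, the two increments can interleave and lose updates; with it, the result is always exactly 200000.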
Progress
If no process is executing in its critical section and some processes wish to enter, the choice of which enters next cannot be postponed indefinitely.
Solution: Fair selection (e.g., using flags or turn variables).
Bounded Waiting
Guarantees that there is a limit on how many times other processes can enter their critical section before a waiting process gets access.
Solution: Maintain a queue or counter to ensure fairness.
Atomic Instruction
A hardware-level operation that executes as a single, indivisible unit; it cannot be interrupted partway through.
Common operations:
Test-and-Set (checks and sets a lock).
Compare-and-Swap (compares a value and swaps if equal).
Fetch-and-Add (increments atomically).
Wait() / Signal() (semaphore operations, implemented atomically).
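The first three operations can be sketched with C11 `<stdatomic.h>`, which exposes them portably (the `demo_*` helper names are illustrative):

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Fetch-and-Add: atomically add and return the old value. */
int demo_fetch_and_add(void) {
    atomic_int v = 5;
    int old = atomic_fetch_add(&v, 3); /* v becomes 8, old is 5 */
    return old + atomic_load(&v);      /* 5 + 8 = 13 */
}

/* Compare-and-Swap: swap in a new value only if the current
 * value still matches `expected`. */
bool demo_compare_and_swap(void) {
    atomic_int v = 10;
    int expected = 10;
    /* succeeds: v was 10, so it becomes 99 */
    return atomic_compare_exchange_strong(&v, &expected, 99)
           && atomic_load(&v) == 99;
}

/* Test-and-Set: atomic_flag is the C11 form of a test-and-set bit. */
bool demo_test_and_set(void) {
    atomic_flag lock = ATOMIC_FLAG_INIT;
    bool first  = atomic_flag_test_and_set(&lock); /* false: was clear */
    bool second = atomic_flag_test_and_set(&lock); /* true: already set */
    return !first && second;
}
```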
Capability Lists
Each subject (process) maintains a list of objects it can access and the rights (e.g., read/write/execute) for each. This is a subject-oriented approach to access control.
Access Control Lists
Each object maintains a list of subjects that can access it and their permissions. This is an object-oriented approach to access control.
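The object-oriented side can be sketched as a small C structure: the object carries its list of (subject, rights) pairs, and a lookup answers access queries (names are illustrative; deny-by-default for unlisted subjects is an assumption of this sketch):

```c
#include <string.h>
#include <stdbool.h>

/* Rights encoded as bit flags. */
enum { R_READ = 1, R_WRITE = 2, R_EXEC = 4 };

/* ACL entry: the OBJECT stores (subject, rights) pairs. */
struct acl_entry { const char *subject; int rights; };

/* Checks an object's ACL for a subject's right; a capability list
 * would invert this, storing (object, rights) pairs per SUBJECT. */
bool acl_allows(const struct acl_entry *acl, int n,
                const char *subject, int right) {
    for (int i = 0; i < n; i++)
        if (strcmp(acl[i].subject, subject) == 0)
            return (acl[i].rights & right) != 0;
    return false; /* subject not listed: deny by default */
}
```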
Semaphore
A synchronisation primitive using a counter to control access to shared resources.
Types:
Binary semaphore (mutex): Only 0 or 1, used for exclusive access.
Counting semaphore: Allows multiple resources to be shared up to a limit.
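A counting semaphore can be sketched with POSIX `sem_t` (the demo function name is illustrative): initialise with 3 slots, acquire two, release one, and read back the remaining count.

```c
#include <semaphore.h>

/* Counting semaphore sketch using POSIX sem_t. */
int semaphore_demo(void) {
    sem_t sem;
    int value;
    sem_init(&sem, 0, 3); /* 0 = shared between threads; initial count 3 */
    sem_wait(&sem);       /* acquire: count 3 -> 2 */
    sem_wait(&sem);       /* acquire: count 2 -> 1 */
    sem_post(&sem);       /* release: count 1 -> 2 */
    sem_getvalue(&sem, &value);
    sem_destroy(&sem);
    return value;         /* remaining slots */
}
```

With an initial count of 1 the same object behaves as a binary semaphore.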
Mutex
A lock that allows only one thread or process to access a resource at a time. When a thread locks a mutex, others must wait until it is released.
Access Matrix
A conceptual model defining rights of subjects (rows) over objects (columns), with each cell listing the allowed operations.
Owner (Access Matrix)
The subject who controls and can modify the access rights for a specific object.
Copy Operation (Access Matrix)
Allows a subject to grant its access right to another subject, effectively duplicating the permission.
Control (Access Matrix)
Enables a subject to modify access rights for other subjects for a particular object.
Transfer (Access Matrix)
Moves a right from one subject to another, removing it from the original holder.
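The matrix and its copy/transfer operations can be sketched as a 2D array of rights bitmasks (sizes and names are illustrative): copy duplicates a cell's right into another row, transfer also clears it from the source row.

```c
/* Access matrix sketch: rows = subjects, columns = objects,
 * each cell a bitmask of rights. */
enum { RIGHT_READ = 1, RIGHT_WRITE = 2 };
#define NSUBJ 3
#define NOBJ  2

int matrix[NSUBJ][NOBJ]; /* matrix[s][o] = rights of subject s on object o */

/* Copy: duplicate a right to another subject, keeping the original. */
void copy_right(int from, int to, int obj, int right) {
    if (matrix[from][obj] & right)
        matrix[to][obj] |= right;
}

/* Transfer: move a right, removing it from the original holder. */
void transfer_right(int from, int to, int obj, int right) {
    if (matrix[from][obj] & right) {
        matrix[to][obj]   |= right;  /* grant to the new holder */
        matrix[from][obj] &= ~right; /* revoke from the source  */
    }
}
```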
Type 0 Hypervisor
Runs in the hardware or firmware itself, on bare metal, with OS features built in (e.g., IBM z/VM). Provides high efficiency and isolation.
Type 1 Hypervisor
Runs directly on hardware, managing multiple guest OSes without a host OS layer. Examples: VMware ESXi, Microsoft Hyper-V.
Type 2 Hypervisor
Runs as an application within a host OS. Easier to use but adds overhead due to host OS dependency. Examples: VirtualBox, VMware Workstation.
Paravirtualisation
Guest OS is modified to communicate directly with the hypervisor using hypercalls, reducing overhead from emulation (e.g., Xen).
Emulation
Software mimics hardware instructions, allowing one CPU architecture to run code meant for another. Offers compatibility but is slow.
Container
Lightweight form of virtualisation where applications share the host OS kernel but have isolated user-space environments (e.g., Docker, Kubernetes Pods).
Copy-On-Write
Memory optimisation technique where processes initially share the same pages; copies are made only when a process modifies a page.
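On Linux, `fork()` relies on copy-on-write: parent and child initially share pages, and the child's first write triggers a private copy, so the parent's data is untouched. A small sketch (the function name is illustrative):

```c
#include <unistd.h>
#include <sys/wait.h>

/* After fork(), the child's write forces a private copy of the page,
 * leaving the parent's value unchanged. */
int cow_demo(void) {
    int value = 1;
    pid_t pid = fork();
    if (pid == 0) {   /* child */
        value = 99;   /* write: page is copied for the child only */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    return value;     /* parent still sees the original value */
}
```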
Anonymous Memory
Memory not backed by a file, created dynamically for processes (e.g., heap or stack segments, memory allocated via malloc).
Caching (Kernel I/O Subsystem)
Stores recently accessed data in memory to speed up future access to the same data, reducing disk I/O.
Example: File system cache in Linux.
Spooling (Kernel I/O Subsystem)
Holds data in a temporary queue for a device that can’t handle concurrent I/O operations.
Example: Print spooler queuing print jobs.
Blocking I/O
The process waits until the I/O operation completes before continuing.
Advantage: Simple programming model.
Disadvantage: CPU time wasted while waiting.
Non-Blocking I/O
The process initiates the I/O and continues execution without waiting; must check periodically if I/O completed.
Advantage: Better responsiveness.
Disadvantage: Requires active polling and complex control flow.
Asynchronous I/O
The process issues an I/O request and continues; it is notified (e.g., via interrupt or callback) when the operation completes.
Advantage: Maximises concurrency.
Disadvantage: Complex implementation and coordination.
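The non-blocking case above can be sketched with POSIX `O_NONBLOCK`: a read from an empty pipe returns immediately with `EAGAIN` instead of waiting for data (the function name is illustrative):

```c
#include <fcntl.h>
#include <unistd.h>
#include <errno.h>

/* Returns 1 if the non-blocking read reported "would block". */
int nonblocking_read_demo(void) {
    int fds[2];
    char buf[8];
    if (pipe(fds) != 0) return -1;
    fcntl(fds[0], F_SETFL, O_NONBLOCK); /* mark read end non-blocking */
    ssize_t n = read(fds[0], buf, sizeof buf); /* pipe is empty */
    int would_block = (n == -1 && errno == EAGAIN);
    close(fds[0]);
    close(fds[1]);
    return would_block;
}
```

A blocking read on the same empty pipe would suspend the process until data arrived; this is the polling point the definition mentions.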
Thread Library
Provides APIs to create, manage, and synchronise threads.
Implementations:
User-level threads: Managed entirely in user space.
Kernel-level threads: Managed by the OS kernel.
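A minimal use of a kernel-backed thread library (POSIX pthreads; the names `square` and `run_square_thread` are illustrative): create a thread, pass it an argument, join it, and collect its result.

```c
#include <pthread.h>

/* Thread function: squares its argument, returned via void*. */
static void *square(void *arg) {
    long n = (long)arg;
    return (void *)(n * n);
}

long run_square_thread(long n) {
    pthread_t t;
    void *result;
    pthread_create(&t, NULL, square, (void *)n);
    pthread_join(t, &result); /* wait for the thread to finish */
    return (long)result;
}
```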
Hardware Thread
A logical execution unit supported by CPU hardware (e.g., Intel Hyper-Threading) that shares core resources but executes separate instruction streams.
User Thread
Threads managed by user-space libraries. Faster to create and switch, but if one thread blocks on a system call, all threads in the process may block.
Kernel Thread
Threads managed by the OS kernel. Can be scheduled on multiple processors but have higher overhead for creation and context switching.
Demand Paging
A page is loaded into memory only when it is first accessed (on page fault).
Hardware support required:
Page table
Valid/invalid bit
Secondary storage (swap space) for missing pages.
Page Thrashing
Occurs when the system spends more time swapping pages in and out of memory than executing processes.
Avoid by: Increasing physical memory, using working set model, or better replacement algorithms (e.g., LRU).
CPU Emulation
Software that reproduces CPU behaviour to run programs designed for a different instruction set (e.g., QEMU). Often used for compatibility or debugging.
Virtualisation
Abstraction of physical resources (CPU, memory, storage, network) into virtual instances. Enables running multiple isolated operating systems or applications on a single physical machine.
Spinlock
A lock where the thread repeatedly checks (spins) until the lock becomes available. Efficient on multiprocessors where waiting times are short.
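A spinlock can be built directly on the test-and-set primitive via C11 `atomic_flag` (illustrative names; a production lock would add backoff and memory-order tuning):

```c
#include <stdatomic.h>

typedef struct { atomic_flag flag; } spinlock_t;

void spin_lock(spinlock_t *l) {
    /* Spin until test-and-set observes the flag clear. */
    while (atomic_flag_test_and_set(&l->flag))
        ; /* busy-wait */
}

/* Non-blocking attempt: returns 1 if the lock was acquired. */
int spin_trylock(spinlock_t *l) {
    return !atomic_flag_test_and_set(&l->flag);
}

void spin_unlock(spinlock_t *l) {
    atomic_flag_clear(&l->flag);
}
```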
Starvation in Synchronisation
Occurs when a process is perpetually denied access to resources.
Avoid by: Fair scheduling (FIFO), aging mechanisms, or round-robin access.
Priority Inversion
When a low-priority process holds a lock needed by a high-priority process, causing priority blocking.
Solution: Priority inheritance (temporarily boost low-priority process).
Deadlock
A set of processes permanently blocked, each waiting for a resource held by another. Occurs when all four conditions (mutual exclusion, hold and wait, no preemption, circular wait) are true.
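The circular-wait condition can be broken by always acquiring locks in a fixed global order; a sketch that orders two mutexes by address (illustrative names, and ordering by pointer is a common convention rather than a guarantee of the standard):

```c
#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Always lock the lower-addressed mutex first, regardless of the
 * order the caller names them: no circular wait is possible. */
void lock_both(pthread_mutex_t *x, pthread_mutex_t *y) {
    if (x > y) { pthread_mutex_t *t = x; x = y; y = t; }
    pthread_mutex_lock(x);
    pthread_mutex_lock(y);
}

void unlock_both(pthread_mutex_t *x, pthread_mutex_t *y) {
    pthread_mutex_unlock(x);
    pthread_mutex_unlock(y);
}
```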