Basic Terms/Definitions From 2020-2024 Exams

45 Terms

1. Read/Write Head

A component of a disk drive that reads data from and writes data to the disk surface. It moves back and forth across the disk to access different data tracks.

2. Sector

A subdivision of a track on a disk, typically containing a fixed amount of data, used to organize data storage. It is the smallest addressable storage unit on a disk.

3. Track

A circular path on a disk where data is recorded, consisting of multiple sectors, allowing the read/write head to access information.

4. Cylinder

A set of tracks located at the same position on multiple disk surfaces, allowing for efficient data retrieval as the read/write head moves between layers.

5. Spindle

A central axis on a disk drive that spins the disks, enabling the read/write heads to access data stored on the surfaces.

6. Process

A program in execution, consisting of code, a program counter, registers, stack, and heap. Each process operates in its own address space managed by the OS.

7. Critical Section

A section of code where a process accesses shared resources that must not be accessed by more than one process at the same time.
Examples:
– Updating a shared counter variable.
– Writing to a shared log file.

8. Mutual Exclusion

Ensures that only one process executes its critical section at any time.
Solution: Use a mutex lock or binary semaphore to prevent concurrent access.
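The mutex-lock solution can be sketched with Python's threading module; the names (`worker`, `counter`) are illustrative, but the pattern — take the lock before the shared update, release it after — is the standard one:

```python
import threading

counter = 0                          # shared resource
lock = threading.Lock()              # mutex guarding the critical section

def worker():
    global counter
    for _ in range(100_000):
        with lock:                   # only one thread may hold the lock
            counter += 1             # the critical section itself

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 400_000            # no updates were lost
```

Without the `with lock:` line, concurrent `counter += 1` operations can interleave and lose updates.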

9. Progress

If no process is in its critical section and some processes wish to enter, only the waiting processes take part in deciding which enters next, and that decision cannot be postponed indefinitely.
Solution: Fair selection (e.g. using flags or turn variables).

10. Bounded Waiting

Guarantees that there is a limit on how many times other processes can enter their critical section before a waiting process gets access.
Solution: Maintain a queue or counter to ensure fairness.
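Peterson's algorithm is the classic flags-and-turn construction that satisfies all three properties (mutual exclusion, progress, bounded waiting) for two processes. A Python sketch follows; it leans on CPython's interpreter lock for sequentially consistent memory, whereas a real implementation would need memory barriers:

```python
import threading, time

flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is to wait
counter = 0
N = 2_000

def enter(i):
    global turn
    other = 1 - i
    flag[i] = True                       # announce intent to enter
    turn = other                         # politely give the other thread priority
    while flag[other] and turn == other:
        time.sleep(0)                    # busy-wait, yielding the CPU

def leave(i):
    flag[i] = False                      # retract intent

def worker(i):
    global counter
    for _ in range(N):
        enter(i)
        counter += 1                     # critical section
        leave(i)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 2 * N                  # mutual exclusion held: no lost updates
```

Because a thread sets `turn` to the *other* thread before spinning, neither can barge in twice while the other waits — which is exactly the bounded-waiting guarantee.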

11. Atomic Instruction

A hardware-level operation that executes as a single, indivisible unit — cannot be interrupted.
Common operations:

  • Test-and-Set (checks and sets a lock).

  • Compare-and-Swap (compares a value and swaps if equal).

  • Fetch-and-Add (increments atomically).

  • Wait() / Signal() (used for semaphores).
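The semantics of Test-and-Set can be modelled in Python. This is a pedagogical sketch only: on real hardware the read-and-write is a single indivisible instruction, which is simulated here with an internal `threading.Lock`:

```python
import threading

class SimulatedTAS:
    """Models Test-and-Set: atomically return the flag's old value
    and set it to True."""
    def __init__(self):
        self._flag = False
        self._guard = threading.Lock()   # stands in for hardware atomicity

    def test_and_set(self):
        with self._guard:
            old = self._flag
            self._flag = True
            return old

    def clear(self):
        with self._guard:
            self._flag = False

tas = SimulatedTAS()
assert tas.test_and_set() is False   # lock was free, caller acquired it
assert tas.test_and_set() is True    # already held: caller must retry
tas.clear()                          # release
```

A lock built on this primitive simply loops until `test_and_set()` returns `False`.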

12. Capability Lists

Each subject (process) maintains a list of objects it can access and the rights (e.g., read/write/execute) for each. This is a subject-oriented approach to access control.

13. Access Control Lists

Each object maintains a list of subjects that can access it and their permissions. This is an object-oriented approach to access control.

14. Semaphore

A synchronisation primitive using a counter to control access to shared resources.
Types:

  • Binary semaphore (mutex): Only 0 or 1, used for exclusive access.

  • Counting semaphore: Allows multiple resources to be shared up to a limit.
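A counting semaphore in action, sketched with Python's `threading.Semaphore` (the resource itself is hypothetical; the point is that the peak concurrency never exceeds the semaphore's count):

```python
import threading, time

pool = threading.Semaphore(3)    # counting semaphore: at most 3 holders
active = 0
peak = 0
state = threading.Lock()         # protects the two counters below

def use_resource():
    global active, peak
    with pool:                   # wait() on entry, signal() on exit
        with state:
            active += 1
            peak = max(peak, active)
        time.sleep(0.001)        # pretend to use one of the 3 resources
        with state:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()

assert 1 <= peak <= 3            # never more than 3 threads inside
```

Initialising the semaphore with 1 instead of 3 would give a binary semaphore, i.e. mutex behaviour.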

15. Mutex

A lock that allows only one thread or process to access a resource at a time. When a thread locks a mutex, others must wait until it is released.

16. Access Matrix

A conceptual model defining rights of subjects (rows) over objects (columns), with each cell listing the allowed operations.
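A toy access matrix as a dict of dicts (subjects and objects invented for illustration). Note how a single row read off this structure is a capability list, and a single column is an access control list:

```python
# rows = subjects, columns = objects, cells = sets of rights
matrix = {
    "alice": {"file1": {"read", "write"}, "file2": {"read"}},
    "bob":   {"file1": {"read"},          "printer": {"write"}},
}

def allowed(subject, obj, right):
    """Check one cell of the matrix."""
    return right in matrix.get(subject, {}).get(obj, set())

assert allowed("alice", "file1", "write")      # alice's row grants it
assert not allowed("bob", "file1", "write")    # bob's cell has only read
```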

17. Owner (Access Matrix)

The subject who controls and can modify the access rights for a specific object.

18. Copy Operation (Access Matrix)

Allows a subject to grant its access right to another subject, effectively duplicating the permission.

19. Control (Access Matrix)

Enables a subject to modify access rights for other subjects for a particular object.

20. Transfer (Access Matrix)

Moves a right from one subject to another, removing it from the original holder.

21. Type 0 Hypervisor

Implemented in the hardware's firmware, running on bare metal with OS features built in (e.g., IBM z/VM). Provides high efficiency and isolation.

22. Type 1 Hypervisor

Runs directly on hardware, managing multiple guest OSes without a host OS layer. Examples: VMware ESXi, Microsoft Hyper-V.

23. Type 2 Hypervisor

Runs as an application within a host OS. Easier to use but adds overhead due to host OS dependency. Examples: VirtualBox, VMware Workstation.

24. Paravirtualisation

Guest OS is modified to communicate directly with the hypervisor using hypercalls, reducing overhead from emulation (e.g., Xen).

25. Emulation

Software mimics hardware instructions, allowing one CPU architecture to run code meant for another. Offers compatibility but is slow.

26. Container

Lightweight form of virtualisation where applications share the host OS kernel but have isolated user-space environments (e.g., Docker, Kubernetes Pods).

27. Copy-On-Write

Memory optimisation technique where processes initially share the same pages; copies are made only when a process modifies a page.
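The mechanism can be sketched with a toy page object (the `SharedPage`/`Mapping` classes are invented for illustration; real COW happens in the kernel's page tables with a hardware write-protect fault):

```python
class SharedPage:
    """A 'physical' page with a reference count."""
    def __init__(self, data):
        self.data = bytearray(data)
        self.refs = 1

class Mapping:
    """One process's view of a page."""
    def __init__(self, page):
        self.page = page

    def fork(self):
        self.page.refs += 1              # child shares the page; no copy yet
        return Mapping(self.page)

    def read(self, i):
        return self.page.data[i]         # reads never copy

    def write(self, i, value):
        if self.page.refs > 1:           # page is shared: copy before writing
            self.page.refs -= 1
            self.page = SharedPage(bytes(self.page.data))
        self.page.data[i] = value

parent = Mapping(SharedPage(b"hello"))
child = parent.fork()
assert child.page is parent.page         # still shared after fork
child.write(0, ord("H"))                 # first write triggers the copy
assert child.page is not parent.page     # child now has a private page
assert parent.read(0) == ord("h")        # parent's data is untouched
```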

28. Anonymous Memory

Memory not backed by a file, created dynamically for processes (e.g., heap or stack segments, memory allocated via malloc).

29. Caching (Kernel I/O Subsystem)

Stores recently accessed data in memory to speed up future access to the same data, reducing disk I/O.
Example: File system cache in Linux.

30. Spooling (Kernel I/O Subsystem)

Holds data in a temporary queue for a device that can’t handle concurrent I/O operations.
Example: Print spooler queuing print jobs.

31. Blocking I/O

The process waits until the I/O operation completes before continuing.
Advantage: Simple programming model.
Disadvantage: CPU time wasted while waiting.

32. Non-Blocking I/O

The process initiates the I/O and continues execution without waiting; must check periodically if I/O completed.
Advantage: Better responsiveness.
Disadvantage: Requires active polling and complex control flow.
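The fail-immediately-then-poll behaviour can be demonstrated with a socket pair switched to non-blocking mode (a local `socketpair` is used here so the example is self-contained):

```python
import socket

a, b = socket.socketpair()
a.setblocking(False)             # put the receiving end in non-blocking mode

try:
    data = a.recv(1024)          # nothing queued yet: does not wait
except BlockingIOError:
    data = None                  # caller must poll again later

assert data is None              # the call returned immediately

b.send(b"ping")
data = a.recv(1024)              # data is now ready, so recv succeeds
assert data == b"ping"
```

A blocking socket would simply have paused the process inside the first `recv` instead of raising.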

33. Asynchronous I/O

The process issues an I/O request and continues; it is notified (e.g., via interrupt or callback) when the operation completes.
Advantage: Maximises concurrency.
Disadvantage: Complex implementation and coordination.

34. Thread Library

Provides APIs to create, manage, and synchronise threads.
Implementations:

  • User-level threads: Managed entirely in user space.

  • Kernel-level threads: Managed by the OS kernel.

35. Hardware Thread

A logical execution unit supported by CPU hardware (e.g., Intel Hyper-Threading) that shares core resources but executes separate instruction streams.

36. User Thread

Threads managed by user-space libraries. Faster to create and switch, but if one thread makes a blocking system call, all threads in the process may block.

37. Kernel Thread

Threads managed by the OS kernel. Can be scheduled on multiple processors but have higher overhead for creation and context switching.

38. Demand Paging

A page is loaded into memory only when it is first accessed (on page fault).
Hardware support required:

  1. Page table

  2. Valid/invalid bit

  3. Secondary storage (swap space) for missing pages.

39. Page Thrashing

Occurs when the system spends more time swapping pages in and out of memory than executing processes.
Avoid by: Increasing physical memory, using working set model, or better replacement algorithms (e.g., LRU).
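An LRU page-replacement policy, one of the mitigations mentioned above, fits in a few lines using an ordered dictionary (frame count and reference string are made up for the example):

```python
from collections import OrderedDict

frames = OrderedDict()      # resident pages, oldest first
CAPACITY = 3                # pretend physical memory holds 3 frames
faults = 0

def access(page):
    global faults
    if page in frames:
        frames.move_to_end(page)        # mark as most recently used
    else:
        faults += 1                     # page fault
        if len(frames) >= CAPACITY:
            frames.popitem(last=False)  # evict the least recently used page
        frames[page] = True

for p in [1, 2, 3, 1, 4, 1, 2]:
    access(p)

assert faults == 5          # 1, 2, 3 fault; 4 evicts 2; 2 evicts 3
```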

40. CPU Emulation

Software that reproduces CPU behaviour to run programs designed for a different instruction set (e.g., QEMU). Often used for compatibility or debugging.

41. Virtualisation

Abstraction of physical resources (CPU, memory, storage, network) into virtual instances. Enables running multiple isolated operating systems or applications on a single physical machine.

42. Spinlock

A lock where the thread repeatedly checks (spins) until the lock becomes available. Efficient on multiprocessors where waiting times are short.
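A spinlock sketch in Python. Pure Python has no hardware atomic instructions, so the test-and-set step is simulated with an internal `threading.Lock`; the point is the spin loop, which keeps retrying instead of sleeping:

```python
import threading, time

class SpinLock:
    """Busy-waiting lock built on a (simulated) test-and-set."""
    def __init__(self):
        self._held = False
        self._guard = threading.Lock()    # stands in for hardware atomicity

    def _test_and_set(self):
        with self._guard:
            old, self._held = self._held, True
            return old

    def acquire(self):
        while self._test_and_set():       # spin until the old value was False
            time.sleep(0)                 # yield so the holder can make progress

    def release(self):
        self._held = False

n = 0
lock = SpinLock()

def bump():
    global n
    for _ in range(2_000):
        lock.acquire()
        n += 1                            # critical section
        lock.release()

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert n == 8_000
```

On a real multiprocessor the spin avoids a context switch, which is why spinlocks win when hold times are shorter than a switch.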

43. Starvation in Synchronisation

Occurs when a process is perpetually denied access to resources.
Avoid by: Fair scheduling (FIFO), aging mechanisms, or round-robin access.

44. Priority Inversion

When a low-priority process holds a lock needed by a high-priority process, so the high-priority process is effectively blocked by lower-priority work.
Solution: Priority inheritance (temporarily boost the lock holder to the waiter's priority until it releases the lock).

45. Deadlock

A set of processes permanently blocked, each waiting for a resource held by another. Occurs when all four conditions (mutual exclusion, hold and wait, no preemption, circular wait) are true.
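Breaking any one of the four conditions prevents deadlock. A common practical fix is a global lock ordering, which removes circular wait; a sketch with two invented locks:

```python
import threading

# Two locks that could deadlock if thread X took a-then-b while
# thread Y took b-then-a. Imposing one global order (always `a`
# before `b`) makes a cycle in the wait-for graph impossible.
a = threading.Lock()
b = threading.Lock()
done = 0

def transfer():
    global done
    with a:               # every thread acquires in the same order
        with b:
            done += 1

threads = [threading.Thread(target=transfer) for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
assert done == 100        # all threads finished: no circular wait arose
```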