CA216 Operating Systems Concepts and Mechanisms


181 Terms

1
New cards

Process

An instance of a program in execution; it has its own memory space and resources.

2
New cards

Thread

Shares memory with other threads in the same process.

3
New cards

Use-case for fork() + exec()

A shell launching a command: fork to create a new process, then exec to load the command.

4
New cards

kill

Sends a signal to a process.

5
New cards

strace

Traces system calls and signals made by a program.

6
New cards

Amdahl's Law formula

S = 1 / ((1 - P) + P / N), where P = parallel fraction, N = processors.

7
New cards

Semaphore

A signaling mechanism to control access to shared resources.

8
New cards

Busy waiting/spinlock

A process constantly checks a condition.

9
New cards

Deadlock conditions

Mutual exclusion, Hold and wait, No preemption, Circular wait.

10
New cards

Prevention vs Avoidance Deadlock

Prevention: Break one of the four conditions; Avoidance: Use safe states (e.g., Banker's algorithm).

11
New cards

IPC mechanisms in Linux

Pipes, Shared Memory, Message Queues, Semaphores.

12
New cards

Paging

The process address space is broken into fixed-size blocks called pages; a process's size can be measured by its number of pages.

13
New cards

TLB (Translation Lookaside Buffer)

A special memory cache that holds recent virtual-to-physical address translations.

14
New cards

Impacts of large vs small page sizes

Small pages: more overhead from larger page tables; large pages: more internal fragmentation.

15
New cards

Wildcard Expansion

The shell expands the wildcard and passes the result to the program, not the string '*'.

16
New cards

Adding Directory to PATH

export PATH=$PATH:$HOME/bin

17
New cards

wait() Function

It makes the parent process wait until the child process finishes.

18
New cards

What is execlp() Behavior

It replaces the current process image; code after it won't run if it succeeds.

19
New cards

Process Lifecycle Transition

Running → Ready: the process was interrupted, usually by the scheduler for time-sharing.

20
New cards

Thread Race Condition

Lack of atomic operations or synchronization.

21
New cards

Fixing Race Conditions in C

Use atomic_int or mutexes for shared variable updates.

22
New cards

Dining Philosophers Problem

A classic solution introduces asymmetry (e.g., one philosopher picks up forks in the opposite order), preventing circular wait.

23
New cards

Inode

A data structure that stores information about a file except its name.

24
New cards

Internal Fragmentation

Wasted space within allocated memory blocks due to allocation rounding.

25
New cards

Process Control Block (PCB)

It contains process information like PID, registers, scheduling info, memory info, etc.

26
New cards

Heap Usage Reporting

Heap size is dynamic and allocated at runtime, so it cannot be reported by static tools.

27
New cards

Direct Memory Access (DMA)

Hardware feature allowing devices to access memory without CPU involvement.

28
New cards

Conditions for Deadlock

Mutual exclusion, Hold and wait, No preemption, Circular wait.

29
New cards

Amdahl's Law

Limits on performance gain from parallelization due to serial portions of code.

30
New cards

Safe State in Resource Allocation

The system can avoid deadlock by safely allocating resources in some order.

31
New cards

Difference Between Process and Thread

A process is an independent program with its own memory space and resources. A thread is a lightweight unit of execution within a process, sharing the process's memory and resources.

32
New cards

What is fork()

Creates a new child process by duplicating the parent process.

33
New cards

Process Control Block (PCB) Definition

The PCB is a data structure in the OS that stores information about a process, including process ID, state, CPU registers, memory info, and open files.

34
New cards

Linux Process Life-Cycle States

New, Ready, Running, Waiting, Terminated.

35
New cards

Initialized Data

Global/static variables with initial values, e.g., int b=2.

36
New cards

Uninitialized Data

Global/static variables without initial values.

37
New cards

Heap

Region for dynamic memory allocation (e.g., malloc).

38
New cards

Stack

Local variables and function calls.

39
New cards

Paging

Divides virtual memory into fixed-size pages and physical memory into frames, with a page table mapping virtual addresses to physical addresses.

40
New cards

Page Table

Maps virtual addresses (page number + offset) to physical addresses.

41
New cards

Swapping

Moves pages to disk when memory is full.

42
New cards

Internal Fragmentation

Occurs when allocated memory is larger than needed, wasting space within the block.

43
New cards

Translation Lookaside Buffer (TLB)

A cache that stores recent page table mappings to speed up virtual-to-physical address translation.

44
New cards

Difference between Paging and Swapping

Paging divides memory into pages for efficient allocation; swapping moves entire processes or pages to disk when memory is full.

45
New cards

Amdahl's Law

Calculates parallel program speedup: Speedup = 1 / (s + (1-s)/N), where s is the serial proportion and N is the number of processors.

46
New cards

Preemptive Scheduling

Allows the OS to interrupt and switch processes.

47
New cards

Non-Preemptive Scheduling

Runs a process to completion or until it yields.

48
New cards

Shortest-Remaining-Time-First (SRTF) Scheduling

A preemptive scheduler that prioritizes the process with the shortest remaining burst time.

49
New cards

Processor Affinity

Assigns a process to a specific CPU to improve cache performance by reducing cache misses.

50
New cards

Critical Section Problem

Code accessing shared resources that requires mutual exclusion to prevent race conditions.

51
New cards

Race Condition

Occurs when multiple processes/threads access shared resources concurrently, leading to unpredictable outcomes.

52
New cards

Test_and_Set

An atomic instruction for synchronization that sets a boolean and returns its old value atomically.

53
New cards

Compare_and_Swap

An atomic instruction that updates a value if it matches an expected value.

54
New cards

Deadlock

It occurs when processes hold resources and wait for others, forming a cycle.

55
New cards

Deadlock Example

Two processes each hold one resource and request the other.

56
New cards

Deadlock Prevention

Prevention eliminates one of the four deadlock conditions (e.g., break circular wait by resource ordering).

57
New cards

Deadlock Avoidance (Banker’s Algorithm)

The Banker's Algorithm is a deadlock avoidance method that checks if granting a resource request keeps the system in a safe state by dynamically checking resource allocation, ensuring no deadlock occurs.

58
New cards

OS Recovery from Deadlock

Terminate one or more processes, roll back a process to release resources, preempt resources, or restart the system.

59
New cards

Dining Philosophers Problem

Five philosophers alternate between thinking and eating, needing two forks (shared resources).

60
New cards

Dining Philosophers Solution

Use semaphores or resource ordering.

61
New cards

Inode

An inode is a filesystem structure storing file metadata (permissions, owner, size, timestamps) and pointers to data blocks, but not the file name.

62
New cards

pstree Command

pstree displays a tree of processes, showing parent-child relationships.

63
New cards

Direct Memory Access (DMA)

DMA allows devices to transfer data to/from memory without CPU involvement, improving performance over programmed I/O.

64
New cards

Dynamically Linked Library (DLL)

A DLL is loaded at runtime and shared among programs.

65
New cards

Busy Waiting/Spinlock

When a process repeatedly checks a condition, wasting CPU.

66
New cards

Bootstrap Program

Loads at power-up, initializes hardware, and loads the OS kernel.

67
New cards

Cache locality

The tendency of a process to reuse recently accessed data; assigning a process to a specific CPU (processor affinity) exploits this locality, improving performance.

68
New cards

Dividing by zero

A trap (software-generated interrupt) is raised, and the OS terminates the offending program, preventing a system crash.

69
New cards

Foreground process

A process that is currently open and visible on the display, with keyboard or other I/O directed to it.

70
New cards

Kernel mode

Allows all instructions to be executed, with only the kernel running, providing full control over system resources.

71
New cards

Context switch

It is when the OS saves the state of a running process via its PCB and loads another process's state to run, enabling multitasking.

72
New cards

Multi-threaded program performance

Overhead from thread creation/management and synchronisation can slow performance.

73
New cards

What is Compaction in memory management

It rearranges memory to consolidate free space into a single large block, reducing external fragmentation.

74
New cards

Cache memory

It is small, fast memory between the processor and main memory, storing frequently accessed data to speed up CPU operations.

75
New cards

External fragmentation

It occurs when free memory is scattered in small blocks, insufficient for new processes despite total free space being adequate.

76
New cards

Unequal-sized partitions

Fit different-sized processes better, cutting down on wasted memory and making memory use more efficient than equal-sized partitions.

77
New cards

Linux size command and heap size

The size command reports statically sized sections determined at compile time; the heap is allocated dynamically at runtime, so it is not included.

78
New cards

Round Robin scheduling

Preemptive scheduling that assigns each process a time quantum in a cyclic order, ensuring no process is permanently overlooked, though delays may occur with many processes.

79
New cards

What is Starvation in scheduling

It occurs when a runnable process is indefinitely overlooked by the scheduler despite being able to proceed.

80
New cards

Livelock

When threads continuously attempt an action that fails, preventing progress, unlike a deadlock where threads are blocked.

81
New cards

Monitor

A synchronisation construct that ensures mutual exclusion for shared resources, allowing only one thread to execute its critical section at a time.

82
New cards

Atomic operation

An indivisible instruction that completes without interruption, ensuring no race conditions.

83
New cards

circular wait condition for deadlock

Occurs when a set of processes form a cycle, each holding a resource and waiting for the next process's resource.

84
New cards

resource-allocation graph for deadlock

With a single instance of each resource type, a cycle in the resource-allocation graph implies deadlock; with multiple instances, a cycle is necessary but not sufficient.

85
New cards

sequential vs. random access in file systems

Sequential access reads/writes data in order. Random access allows direct access to any data part.

86
New cards

benefits and drawbacks of single vs. multiple file copies

Single copy: Saves space, ensures consistency; drawback: contention delays. Multiple copies: Faster access, fault tolerance; drawback: inconsistency if not synchronized.

87
New cards

strace command

Traces the system calls and signals made by a program, useful for debugging or monitoring. Example calls shown: open(), read().

88
New cards

pipe in operating systems

A logical conduit for inter-process communication, allowing data to flow from one process's output to another's input.

89
New cards

asynchronous thread

Created by a parent thread, which resumes execution, allowing both to run concurrently and independently.

90
New cards

logical address

A virtual address generated by the CPU, mapped to a physical address by the MMU.

91
New cards

shared memory and message passing

Shared memory allows processes to communicate via a common memory region. Message passing sends data via OS-mediated messages.

92
New cards

privileged instructions

Restricted to kernel mode, managing critical operations like interrupts or I/O. Attempting one from user mode causes a trap to the OS.

93
New cards

dispatcher in an operating system

The OS component that selects and switches to the next process to run, performing the context switch.

94
New cards

What is unbounded buffer

It has no practical size limit, allowing producers to add items without waiting for space.

95
New cards

New state

Process is initialized. Moves to Ready when resources are allocated.

96
New cards

Ready state

Process is waiting for CPU. Moves to Running when scheduled.

97
New cards

Running state

Process is executing. Moves to Ready if pre-empted. Moves to Waiting if it needs I/O or an event. Moves to Terminated if it completes.

98
New cards

Waiting state

Process awaits an event. Moves to Ready when the event occurs.

99
New cards

Termination state

Process has finished; if its parent hasn't reaped it yet, it remains a zombie, fully exiting after the parent calls wait().

100
New cards

Text

Stores the program's executable code.