UF OS midterm

147 Terms

1
New cards

Algorithm

A finite set of instructions with an initial store and state, a starting point, and an unambiguous order of execution until an endpoint, known as the halt point.

2
New cards

Operating System (OS)

Manages resources, provides an interface for I/O devices, ensures system security, and facilitates multitasking by managing multiple processes and user interactions.

3
New cards

Program

A sequence of instructions representing an algorithm.

4
New cards

Generation 1 Computers

Used vacuum tubes, were unreliable, and required users to manually load programs using plug boards. They could only run one program at a time.

5
New cards

Generation 2 Computers

Used transistors, which were more reliable and faster than vacuum tubes, and introduced batch processing systems.

6
New cards

Multiprogramming in generation 2B

Kept the CPU busy by managing multiple programs in different phases of execution, forming a pipeline where one program could read, another process, and another write simultaneously.

7
New cards

Generation 3 computers

The use of integrated circuits allowed computers to perform multiple tasks per second, supporting multiple users interacting with the system through terminals, giving the illusion of real-time interaction.

8
New cards

System Resources

The OS manages system resources such as CPU time, working memory (main memory), storage (disks or SSDs), and device input/output (e.g., keyboard, monitors).

9
New cards

Files

Logically named persistent storage used for data or device communication.

10
New cards

Process

A program in execution, with allocated memory and associated resources like stack, pointers, and files.

11
New cards

System Call

A named call that involves switching to kernel mode for execution, needed for protected operations like memory allocation or file access. It's initiated by user programs via system library routines and executed by the OS dispatcher.

12
New cards

Procedure Call

A call executed entirely in user space, involving local variables and parameters.

13
New cards

POSIX

A set of standards for the design of Unix-compatible operating systems. It defines system calls for process management (e.g., fork, exit) and file operations (e.g., open, close), ensuring consistency across different OS platforms.
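
As an illustration, Python's `os` module exposes thin wrappers over the POSIX file-operation calls named above (the temp-directory path and file name here are made up for the example):

```python
import os
import tempfile

# POSIX-style file operations via Python's os wrappers
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)  # open(2)
os.write(fd, b"hello")                               # write(2)
os.close(fd)                                         # close(2)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 5)                                # read(2)
os.close(fd)
print(data)  # b'hello'
```

The same call names (`open`, `read`, `write`, `close`) behave consistently on any POSIX-compliant OS, which is exactly the portability the standard is for.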

14
New cards

Hardware Interrupt

When a device signals the CPU that it needs attention, typically by raising a voltage line.

15
New cards

Precise Interrupts

Occurs after an instruction completes, ensuring a known execution state but potentially delaying the interrupt.

16
New cards

Imprecise Interrupts

May interrupt mid-instruction, allowing faster handling but increasing complexity in resuming partially executed instructions.

17
New cards

Interrupt Controller

Uses interrupt masks, binary values stored in a register, to determine which interrupt signals to listen to. If a new interrupt arrives while another is being handled, the mask will block the new request until the previous one is completed.

18
New cards

Microkernel Architecture

Only core OS functionalities are in kernel space, and modules communicate via message passing, allowing easier updates.

19
New cards

Monolithic Kernel

All OS components like file systems and device drivers are compiled together and operate in kernel space, requiring system calls for access.

20
New cards

Pseudoparallelism

Achieved by time-slicing, where each process is given a small time slot called a quantum to execute on the CPU, even on single-core systems.

21
New cards

Two ways a process can be terminated, and how they differ

Voluntarily (e.g., when a process completes its instructions or encounters an abnormal exit like file access failure) or involuntarily (e.g., division by zero or receiving a kill signal from another process).

22
New cards

Process Control Block (PCB)

Contains details necessary for managing a process, such as register values, memory management info (text, data, heap, stack pointers), and file management info (file descriptors).

23
New cards

PCB importance

Ensures that the OS can effectively manage, execute, and track processes.

24
New cards

OS Scheduler

Manages process scheduling by determining which process to run during each quantum of time. It responds to interrupts, ensures all processes receive execution time, and prevents starvation by switching processes as needed.

25
New cards

Process States

Running (executing), ready (loaded in memory but not executing), blocked (waiting for an event), initializing (preparing to run), or exiting (cleaning up after execution).

26
New cards

Context Switch

The process of storing the state of a process so that it can be resumed later. Requires the OS to save the current process's CPU register values and update the process's state in the Process Control Block (PCB).

27
New cards

What causes a context switch?

I/O operations, process yielding, or completion of a quantum of execution.

28
New cards

Single-threaded Process

A program in execution with one thread of execution, having its own memory areas, CPU registers, and a stack for procedure calls.

29
New cards

Multi-threaded Process

A process whose threads each have their own register information and stack, but share code, data, and file descriptors, making threads lightweight.

30
New cards

Advantages of Threads

Lower memory costs (shared memory), easier communication between threads, reduced creation/destruction time, and faster context switching compared to processes.

31
New cards

User-space Threads

Managed by the programmer without OS awareness, leading to quicker context switches but potential blocking issues.

32
New cards

Kernel-space Threads

Managed by the OS, allowing individual scheduling and blocking without affecting others.

33
New cards

Race Condition

When two or more processes or threads attempt to perform operations on the same resource simultaneously, leading to undesirable outcomes and data corruption.
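
A minimal sketch of the standard fix using Python's `threading` module (the counter and thread counts are arbitrary): the read-modify-write on `counter` is protected by a lock, so no updates are lost.

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:        # critical region: read-modify-write is now atomic
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- without the lock, interleaved updates could be lost
```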

34
New cards

Critical Regions

Code sections protected from interruptions, ensuring that operations within them are atomic.

35
New cards

How to avoid race conditions?

Concurrency (no assumptions about CPU speed), safety (no two processes in critical regions simultaneously), liveness (no process blocking others), and no indefinite waiting for process entry into critical regions.

36
New cards

What are the two primary synchronization criteria in process synchronization?

Safety and Liveness. Safety ensures data consistency during access by processes, while Liveness ensures all processes eventually complete.

37
New cards

Shadow Copies

Prevent inconsistent data from being read during a write operation by allowing readers to access an old version of data while the new version is being prepared.

38
New cards

Barriers

Require all processes to reach a specified point before any can continue, ensuring that no process moves forward until all are ready.
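
A small sketch with Python's `threading.Barrier` (three worker threads, chosen arbitrarily): every "before" entry in the log is guaranteed to precede every "after" entry.

```python
import threading

barrier = threading.Barrier(3)
phase_log = []
log_lock = threading.Lock()

def worker(name):
    with log_lock:
        phase_log.append(("before", name))
    barrier.wait()   # nobody proceeds until all 3 threads arrive here
    with log_lock:
        phase_log.append(("after", name))

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```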

39
New cards

Mutexes

Allow only one thread to access a resource at a time and can only be unlocked by the thread that locked them.

40
New cards

Semaphores

Counters that can be incremented (up) or decremented (down) by any thread, allowing a bounded number of threads to access a resource concurrently.

41
New cards

Monitors

Language constructs that use conditional variables to manage access to critical regions, ensuring that only one process can enter at a time and can be implemented implicitly (like in Java) or explicitly (like in Python).

42
New cards

Busy Waiting

Occurs when processes that can't acquire a lock continuously try to do so without blocking, leading to wasted CPU time.

43
New cards

Busy waiting with Flagging Interest

Allows a process to enter the critical region multiple times if the other process shows no interest, avoiding excessive waiting times.

44
New cards

Strict Alternation

A method where two processes take turns entering a critical region, enforced by a shared variable that indicates whose turn it is.

45
New cards

Limitation of Strict Alternation

Can lead to inefficiencies when processes have different execution times, causing one process to wait unnecessarily for the other.

47
New cards

Atomic Operations (Mutual Exclusion)

Ensure that specific actions, such as checking and setting a lock, are completed as a single operation, preventing other processes from interfering in between.

48
New cards

Mutual Exclusion

Ensures that only one process can access a critical section of code at a time, preventing conflicts and data corruption.

49
New cards

Deadlock

Occurs when two processes are each waiting for the other to release a lock, causing both to be stuck indefinitely.

50
New cards

Peterson's Solution

Combines flagging and strict alternation to avoid deadlock by ensuring that processes check both the turn and interest status of the other process before entering the critical region.
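
A sketch of the protocol in Python (iteration counts are arbitrary). Caveat: this relies on CPython's bytecode-level atomicity standing in for the sequentially consistent memory the algorithm assumes; on real hardware it would need memory barriers.

```python
import sys
import threading

sys.setswitchinterval(1e-4)  # frequent GIL hand-offs so busy waits resolve quickly

flag = [False, False]        # each process's "interested" flag
turn = 0
count = 0

def worker(me):
    global turn, count
    other = 1 - me
    for _ in range(500):
        flag[me] = True      # entry: declare interest
        turn = other         # politely yield the turn to the other process
        while flag[other] and turn == other:
            pass             # busy wait until it is safe to enter
        count += 1           # critical region
        flag[me] = False     # exit: withdraw interest

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)  # 1000 if mutual exclusion held
```

Note how the entry protocol checks both conditions from the card: the other process's interest (`flag[other]`) and whose turn it is (`turn == other`).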

51
New cards

Test and Set Lock (TSL)

Atomically checks and sets a lock value, allowing a process to determine if it can acquire the lock without race conditions.

52
New cards

Locking server in message passing for mutual exclusion

Manages lock requests from processes, allowing them to block until a lock is available, facilitating synchronization across potentially distributed systems.

53
New cards

Process scheduling in modern computing systems

Allows multiple programs to make progress at once (e.g., one performing I/O while another uses the CPU).

54
New cards

Long-term scheduler (admission scheduler)

Maximizes throughput by managing jobs in batch systems.

55
New cards

Medium-term scheduler (memory scheduler)

Manages memory usage by moving processes between memory and storage.

56
New cards

Short-term scheduler (CPU scheduler)

Decides which process runs next, ensuring efficient CPU usage.

57
New cards

Batch systems

Focus on maximizing throughput.

58
New cards

Interactive systems

Prioritize quick user response times.

59
New cards

Real-time systems

Ensure processes meet strict deadlines predictably.

60
New cards

Preemptive scheduling

Allows the operating system to interrupt a running process to switch to another.

61
New cards

Non-preemptive scheduling

Lets processes run until they complete or voluntarily yield the CPU.

62
New cards

Batch scheduler

Focuses on maximizing CPU utilization and minimizing turnaround time by efficiently managing job throughput.

63
New cards

Turnaround time

Calculated as the difference between a process's finish time and its arrival time.

64
New cards

(NP) First-come, first-serve (FCFS)

Executes processes in the order they arrive.

65
New cards

(NP) Shortest job first

Prioritizes processes that take the least time to complete.
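
The last few cards (turnaround time, FCFS, SJF) can be illustrated with a tiny simulation; the burst times are made up, and all jobs are assumed to arrive at t=0 so turnaround equals finish time.

```python
def avg_turnaround(bursts):
    """Average turnaround time when jobs run back to back from t=0."""
    t = total = 0
    for b in bursts:
        t += b        # this job finishes at time t
        total += t    # turnaround = finish time - arrival time (0)
    return total / len(bursts)

bursts = [6, 3, 1]                    # CPU bursts in arrival order
fcfs = avg_turnaround(bursts)         # first-come, first-serve: 25/3
sjf = avg_turnaround(sorted(bursts))  # shortest job first: 5.0
print(fcfs, sjf)
```

Running the shortest jobs first lowers the average because short jobs stop "waiting behind" long ones, which is why SJF is optimal for turnaround when burst times are known.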

66
New cards

(NP) Priority scheduling

Runs processes based on assigned priority levels.

67
New cards

Crystal ball algorithm

A theoretical, non-implementable benchmark that represents the optimal sequence of process execution.

68
New cards

(P) Shortest Remaining Time First algorithm

This preemptive algorithm selects the process with the shortest remaining execution time to run next. It is optimal in minimizing turnaround time but may not account for context switch costs.

69
New cards

(P) Round Robin algorithm

Processes run for a fixed time slice (quantum) before the scheduler moves on to the next process in the queue.
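
A minimal round-robin sketch (burst times and the quantum are made up; all jobs arrive at t=0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Finish time of each job under round-robin scheduling."""
    queue = deque(enumerate(bursts))   # (job id, remaining burst)
    t = 0
    finish = [0] * len(bursts)
    while queue:
        i, rem = queue.popleft()
        run = min(quantum, rem)        # run for one quantum at most
        t += run
        if rem > run:
            queue.append((i, rem - run))  # not done: back of the queue
        else:
            finish[i] = t
    return finish

print(round_robin([4, 3], quantum=2))  # [6, 7]
```

With bursts of 4 and 3 and a quantum of 2, the jobs interleave as 2+2, 2+1, finishing at t=6 and t=7.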

70
New cards

Multiple-Queue Priority scheduling

Categorizes processes into different priority queues, with each queue managed using strategies like Round Robin.

71
New cards

Static scheduling

Pre-defines the order of tasks before execution.

72
New cards

Dynamic scheduling

Makes decisions in real-time based on deadlines and system priorities.

73
New cards

Why does the operating system implement memory management?

To track and allocate memory efficiently, handle memory protection, and convert virtual addresses to physical addresses.

74
New cards

Physical addresses

Absolute locations on memory chips.

75
New cards

Virtual addresses

Relative addresses translated by hardware, under OS control, to physical locations.

76
New cards

Fence register

An early memory protection strategy that separates user program memory from OS memory, blocking access if a process tries to access addresses below the fence value.

77
New cards

Base and limit registers

Define memory boundaries for processes, converting logical addresses to physical addresses during runtime to ensure correct memory access.
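
The base-and-limit check can be sketched in a few lines (the addresses are made up for the example):

```python
def translate(logical, base, limit):
    """MMU-style translation: the limit register bounds the address space."""
    if not 0 <= logical < limit:
        raise MemoryError("address out of bounds")  # protection fault
    return base + logical  # the base register relocates the process

print(translate(100, base=4000, limit=200))  # 4100
```

Every memory reference pays this check-and-add at runtime, which is what lets processes be loaded anywhere in physical memory without recompilation.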

78
New cards

Bitmaps in memory management

Represent occupied and free memory blocks using bits.
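
A sketch of searching a bitmap for a run of free blocks (the bitmap values are made up; 1 = occupied, 0 = free):

```python
def first_fit_bitmap(bitmap, k):
    """Index of the first run of k free (0) blocks, or -1 if none exists."""
    run = 0
    for i, bit in enumerate(bitmap):
        run = run + 1 if bit == 0 else 0  # extend or reset the current run
        if run == k:
            return i - k + 1              # start index of the run
    return -1

print(first_fit_bitmap([1, 0, 0, 1, 0, 0, 0], 3))  # 4
```

The scan is simple but linear in the bitmap size, which is the usual trade-off cited against bitmaps for allocation.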

79
New cards

Linked lists in memory management

Track memory usage with nodes representing occupied and empty blocks.

80
New cards

First fit memory allocation

Allocates the first available space.

81
New cards

Best fit memory allocation

Allocates the smallest suitable space.

82
New cards

Worst fit memory allocation

Allocates the largest space.
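
The three fit strategies from the last three cards can be sketched over a free list; the `(start, length)` hole representation and the sample holes are assumptions for the example.

```python
def pick_hole(holes, size, strategy):
    """Choose a free block from holes = [(start, length), ...]."""
    fits = [h for h in holes if h[1] >= size]
    if not fits:
        return None                           # no hole big enough
    if strategy == "first":
        return fits[0]                        # first hole that fits
    if strategy == "best":
        return min(fits, key=lambda h: h[1])  # smallest hole that fits
    if strategy == "worst":
        return max(fits, key=lambda h: h[1])  # largest hole available
    raise ValueError(strategy)

holes = [(0, 5), (10, 12), (30, 6)]
print(pick_hole(holes, 6, "first"))  # (10, 12)
print(pick_hole(holes, 6, "best"))   # (30, 6)
print(pick_hole(holes, 6, "worst"))  # (10, 12)
```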

83
New cards

External fragmentation

Occurs when small fragments of memory remain between allocated partitions that are too small to be used by other processes.

84
New cards

Compaction

Consolidates free memory by grouping allocated partitions together, eliminating empty spaces between them. Similar to defragmentation in disk drives.

85
New cards

Segmentation

Divides processes into different segments like text, heap, and stack. Requires OS support for swapping segments or pages in and out of memory.

86
New cards

Paging

Divides memory into fixed-size pages. Requires OS support for swapping segments or pages in and out of memory.

87
New cards

What is the function of an overlay manager?

Manages the loading and unloading of memory overlays.

88
New cards

What does an overlay manager control?

It controls which program segments are in memory at any time.

89
New cards

In a monolithic kernel architecture, most modules exist in user space, and changes to them require recompiling the entire operating system.

False

90
New cards

A process is defined as a program in execution, and it includes pointers, counters, and variables.

True

91
New cards

In a typical representation of a process in memory, the stack grows upwards and the heap grows downwards.

False

92
New cards

Processes in a microkernel architecture communicate primarily through message passing, reducing the need for system calls.

True

93
New cards

The process control block (PCB) contains information about the process's file management, register values, and much more.

True

94
New cards

The scheduler is a part of the operating system responsible for selecting which process runs next, balancing the needs of processes to prevent starvation.

True

95
New cards

The scheduler sets an alarm to manage process execution time, ensuring that a process runs only for its allocated quantum of time.

True

96
New cards

A process in the "Ready" state is runnable but not currently executing on the CPU.

True

97
New cards

A context switch occurs only due to voluntary CPU yield operations, such as a process going to sleep.

False

98
New cards

A context switch involves saving the state of the currently running process so it can resume later.

True

99
New cards

Kernel-space threads can have threads with different states (e.g., ready, blocked).

True

100
New cards

Threads in a multithreaded process share code, data, and file descriptors, but have their own register information and stack.

True