UF OS midterm

147 Terms

1

Algorithm

A set of instructions with a finite initial store and state, a starting point, and an unambiguous execution order leading to an endpoint, known as the halt point.

2

Operating System (OS)

Manages resources, provides an interface for I/O devices, ensures system security, and facilitates multitasking by managing multiple processes and user interactions.

3

Program

A sequence of instructions representing an algorithm.

4

Generation 1 Computers

Used vacuum tubes, were unreliable, and required users to manually load programs using plug boards. They could only run one program at a time.

5

Generation 2 Computers

Used transistors, which were more reliable and faster than vacuum tubes, and introduced batch processing systems.

6

Multiprogramming in generation 2B

Kept the CPU busy by managing multiple programs in different phases of execution, forming a pipeline where one program could read, another process, and another write simultaneously.

7

Generation 3 computers

The use of integrated circuits allowed computers to switch among tasks many times per second, supporting multiple users interacting with the system through terminals and giving the illusion of real-time interaction.

8

System Resources

The OS manages system resources such as CPU time, working memory (main memory), storage (disks or SSDs), and device input/output (e.g., keyboard, monitors).

9

Files

Logically named persistent storage used for data or device communication.

10

Process

A program in execution, with allocated memory and associated resources like stack, pointers, and files.

11

System Call

A named call that involves switching to kernel mode for execution, needed for protected operations like memory allocation or file access. It's initiated by user programs via system library routines and executed by the OS dispatcher.
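
As a concrete illustration (not part of the original card), Python's os module exposes thin wrappers over these protected operations; each call below traps into kernel mode to run the corresponding system call:

```python
import os
import tempfile

# Each os.* call is a thin wrapper over a kernel system call:
# open(2), write(2), lseek(2), read(2), close(2), unlink(2).
path = os.path.join(tempfile.gettempdir(), f"syscall_demo_{os.getpid()}.txt")

fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)  # system call: open
os.write(fd, b"hello")                             # system call: write
os.lseek(fd, 0, os.SEEK_SET)                       # system call: lseek
data = os.read(fd, 5)                              # system call: read
os.close(fd)                                       # system call: close
os.unlink(path)                                    # system call: unlink
```

Unlike an ordinary procedure call, every one of these lines requires the OS dispatcher to execute the protected operation on the program's behalf.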

12

Procedure Call

A call executed entirely in user space, involving only local variables and parameters; no switch to kernel mode is required.

13

POSIX

A set of standards for the design of Unix-compatible operating systems. It defines system calls for process management (e.g., fork, exit) and file operations (e.g., open, close), ensuring consistency across different OS platforms.
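
A minimal sketch of the POSIX process-management calls named above, using Python's os wrappers around fork, exit, and wait (POSIX-only; the helper name run_in_child is ours):

```python
import os

def run_in_child(fn):
    """Fork a child, run fn there, and return the child's exit code."""
    pid = os.fork()                 # POSIX fork: returns 0 in the child
    if pid == 0:
        os._exit(fn())              # POSIX exit: child terminates with fn's result
    _, status = os.waitpid(pid, 0)  # parent blocks until the child terminates
    return os.WEXITSTATUS(status)   # extract the exit code from the status word
```

Because fork/waitpid/_exit are standardized by POSIX, this sketch behaves the same on any Unix-compatible OS.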

14

Hardware Interrupt

When a device signals the CPU that it needs attention, typically by raising a voltage line.

15

Precise Interrupts

Occurs after an instruction completes, ensuring a known execution state but potentially delaying the interrupt.

16

Imprecise Interrupts

May interrupt mid-instruction, allowing faster handling but increasing complexity in resuming partially executed instructions.

17

Interrupt Controller

Uses interrupt masks, binary values stored in a register, to determine which interrupt signals to listen to. If a new interrupt arrives while another is being handled, the mask will block the new request until the previous one is completed.

18

Microkernel Architecture

Only core OS functionalities are in kernel space, and modules communicate via message passing, allowing easier updates.

19

Monolithic Kernel

All OS components like file systems and device drivers are compiled together and operate in kernel space, requiring system calls for access.

20

Pseudoparallelism

Achieved by time-slicing, where each process is given a small time slot called a quantum to execute on the CPU, even on single-core systems.

21

Two ways a process can be terminated, and how they differ

Voluntarily (e.g., when a process completes its instructions or encounters an abnormal exit like a file access failure) or involuntarily (e.g., division by zero or receiving a kill signal from another process).

22

Process Control Block (PCB)

Contains details necessary for managing a process, such as register values, memory management info (text, data, heap, stack pointers), and file management info (file descriptors).

23

PCB importance

Ensures that the OS can effectively manage, execute, and track processes.

24

OS Scheduler

Manages process scheduling by determining which process to run during each quantum of time. It responds to interrupts, ensures all processes receive execution time, and prevents starvation by switching processes as needed.

25

Process States

Running (executing), ready (loaded in memory but not executing), blocked (waiting for an event), initializing (preparing to run), or exiting (cleaning up after execution).

26

Context Switch

The process of storing the state of a process so that it can be resumed later. Requires the OS to save the current process's CPU register values and update the process's state in the Process Control Block (PCB).

27

What causes a context switch?

I/O operations, process yielding, or completion of a quantum of execution.

28

Single-threaded Process

A program in execution with one thread of execution, having its own memory areas, CPU registers, and a stack for procedure calls.

29

Multi-threaded Process

A process containing multiple threads; each thread has its own register information and stack, but all threads share code, data, and file descriptors, making them lightweight processes.

30

Advantages of Threads

Lower memory costs (shared memory), easier communication between threads, reduced creation/destruction time, and faster context switching compared to processes.

31

User-space Threads

Managed by the programmer without OS awareness, leading to quicker context switches but potential blocking issues.

32

Kernel-space Threads

Managed by the OS, allowing individual scheduling and blocking without affecting others.

33

Race Condition

When two or more processes or threads attempt to perform operations on the same resource simultaneously, leading to undesirable outcomes and data corruption.
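
The classic lost-update outcome can be shown deterministically by hand-interleaving the read-modify-write steps two threads would perform (a simulation of one bad interleaving, not real threads):

```python
# Two "threads" A and B each intend to increment a shared counter once.
shared = 0

a_read = shared       # A reads 0
b_read = shared       # B reads 0, before A writes back
shared = a_read + 1   # A writes 1
shared = b_read + 1   # B also writes 1, overwriting A's update

# One increment was lost: the counter ends at 1, not the expected 2.
```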

34

Critical Regions

Code sections protected from interruptions, ensuring that operations within them are atomic.

35

How to avoid race conditions?

By designing critical-region solutions that satisfy four conditions: concurrency (no assumptions about CPU speed or count), safety (no two processes in their critical regions simultaneously), liveness (no process outside its critical region blocks others), and no indefinite waiting for entry into critical regions.

36

What are the two primary synchronization criteria in process synchronization?

Safety and Liveness. Safety ensures data consistency during access by processes, while Liveness ensures all processes eventually complete.

37

Shadow Copies

Prevent inconsistent data from being read during a write operation by allowing readers to access an old version of data while the new version is being prepared.

38

Barriers

Require all processes to reach a specified point before any can continue, ensuring that no process moves forward until all are ready.
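
A small sketch with Python's threading.Barrier: three threads do phase-1 work, wait at the barrier, and only then observe that everyone finished phase 1 (variable names are ours):

```python
import threading

barrier = threading.Barrier(3)   # releases only once 3 threads have arrived
before, after = [], []

def worker(i):
    before.append(i)             # phase-1 work
    barrier.wait()               # block until all 3 threads reach this point
    after.append(len(before))    # every thread now sees all phase-1 results

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

No thread records its observation until all three have finished phase 1, so every entry in `after` is 3.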

39

Mutexes

Allow only one thread to access a resource at a time and can only be unlocked by the thread that locked them.
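
A sketch with Python's threading.Lock (a mutex): four threads increment a shared counter, and the lock makes each read-modify-write atomic:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:        # only the lock holder may touch the counter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, the unprotected `counter += 1` can lose updates exactly as in the race-condition card.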

40

Semaphores

A counter that can be incremented (up) or decremented (down) by any thread; initialized to N, it allows up to N threads to access a resource concurrently.
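
A sketch with threading.Semaphore initialized to 2, so at most two threads hold the resource at once; peak concurrency is tracked under a separate lock (the variable names are ours):

```python
import threading
import time

sem = threading.Semaphore(2)     # up to 2 threads may be inside at once
state = threading.Lock()
active = peak = 0

def worker():
    global active, peak
    with sem:                    # down() on entry, up() on exit
        with state:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # hold the resource briefly
        with state:
            active -= 1

threads = [threading.Thread(target=worker) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Six threads contend, but the semaphore caps concurrency at its initial value of 2.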

41

Monitors

Language constructs that use condition variables to manage access to critical regions, ensuring that only one process is active inside at a time. They can be implemented implicitly (as in Java) or explicitly (as in Python).
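
Python's explicit form uses threading.Condition, a lock plus a condition variable, with the wait loop re-checking its predicate after every wakeup (a minimal producer/consumer sketch):

```python
import threading

cond = threading.Condition()     # lock + condition variable
buffer, results = [], []

def consumer():
    with cond:                   # enter the "monitor"
        while not buffer:        # always re-check the predicate after waking
            cond.wait()          # release the lock and sleep
        results.append(buffer.pop())

def producer(item):
    with cond:
        buffer.append(item)
        cond.notify()            # wake one waiting consumer

t = threading.Thread(target=consumer)
t.start()
producer(42)
t.join()
```

Whether the consumer runs before or after the producer, the wait loop guarantees it only proceeds once an item exists.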

42

Busy Waiting

Occurs when processes that can't acquire a lock continuously try to do so without blocking, leading to wasted CPU time.

43

Busy waiting with Flagging Interest

Allows a process to enter the critical region multiple times if the other process shows no interest, avoiding excessive waiting times.

44

Strict Alternation

A method where two processes take turns entering a critical region, enforced by a shared variable that indicates whose turn it is.

45

Limitation of Strict Alternation

Can lead to inefficiencies when processes have different execution times, causing one process to wait unnecessarily for the other.

47

Atomic Operations (Mutual Exclusion)

Ensure that specific actions, such as checking and setting a lock, are completed as a single operation, preventing other processes from interfering in between.

48

Mutual Exclusion

Ensures that only one process can access a critical section of code at a time, preventing conflicts and data corruption.

49

Deadlock

Occurs when two processes are each waiting for the other to release a lock, causing both to be stuck indefinitely.

50

Peterson's Solution

Combines flagging and strict alternation to avoid deadlock by ensuring that processes check both the turn and interest status of the other process before entering the critical region.
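
A sketch of Peterson's algorithm for two threads in Python. Note this relies on CPython's GIL providing a sequentially consistent interleaving of these statements; on real hardware the algorithm also needs memory barriers:

```python
import threading

flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is to wait
count = 0
N = 1000

def worker(me):
    global turn, count
    other = 1 - me
    for _ in range(N):
        flag[me] = True                      # declare interest
        turn = other                         # politely yield the turn
        while flag[other] and turn == other:
            pass                             # busy wait
        count += 1                           # critical region
        flag[me] = False                     # leave the critical region

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
```

Checking both the other thread's interest and the turn variable is what avoids the deadlock and starvation problems of flagging or strict alternation alone.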

51

Test and Set Lock (TSL)

Atomically checks and sets a lock value, allowing a process to determine if it can acquire the lock without race conditions.
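
A simulation of TSL semantics in Python: the hardware's atomic read-and-set is stood in for by an internal Lock, and acquire() spins until the old value was False (the class and method names are ours):

```python
import threading

class TestAndSetLock:
    def __init__(self):
        self._guard = threading.Lock()  # stands in for hardware atomicity
        self._locked = False

    def test_and_set(self):
        """Atomically return the old value and set the lock to True."""
        with self._guard:
            old, self._locked = self._locked, True
            return old

    def acquire(self):
        while self.test_and_set():      # spin while the old value was True
            pass                        # busy wait

    def release(self):
        self._locked = False

tsl = TestAndSetLock()
total = 0

def worker():
    global total
    for _ in range(10_000):
        tsl.acquire()
        total += 1                      # critical region
        tsl.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Because test_and_set returns the old value and sets the new one as a single step, no window exists between "check" and "set" for another thread to sneak in.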

52

Locking server in message passing for mutual exclusion

Manages lock requests from processes, allowing them to block until a lock is available, facilitating synchronization across potentially distributed systems.

53

Process scheduling in modern computing systems

Allows multiple programs to make progress at the same time (e.g., one performing I/O while another uses the CPU).

54

Long-term scheduler (admission scheduler)

Maximizes throughput by managing jobs in batch systems.

55

Medium-term scheduler (memory scheduler)

Manages memory usage by moving processes between memory and storage.

56

Short-term scheduler (CPU scheduler)

Decides which process runs next, ensuring efficient CPU usage.

57

Batch systems

Focus on maximizing throughput.

58

Interactive systems

Prioritize quick user response times.

59

Real-time systems

Ensure processes meet strict deadlines predictably.

60

Preemptive scheduling

Allows the operating system to interrupt a running process to switch to another.

61

Non-preemptive scheduling

Lets processes run until they complete or voluntarily yield the CPU.

62

Batch scheduler

Focuses on maximizing CPU utilization and minimizing turnaround time by efficiently managing job throughput.

63

Turnaround time

Calculated as the difference between a process's finish time and its arrival time.

64

(NP) First-come, first-serve (FCFS)

Executes processes in the order they arrive.

65

(NP) Shortest job first

Prioritizes processes that take the least time to complete.
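
Comparing the two non-preemptive policies on the same jobs (all arriving at t = 0, so turnaround = finish time). A small simulation with made-up job names and burst times:

```python
def fcfs(jobs):
    """Run jobs in the given order; return finish time per job."""
    t, finish = 0, {}
    for name, burst in jobs:
        t += burst
        finish[name] = t
    return finish

def sjf(jobs):
    """Non-preemptive shortest-job-first: FCFS over jobs sorted by burst."""
    return fcfs(sorted(jobs, key=lambda job: job[1]))

jobs = [("A", 6), ("B", 3), ("C", 1)]            # (name, burst time)
avg = lambda f: sum(f.values()) / len(f)         # average turnaround (arrival 0)
```

Here FCFS finishes A, B, C at 6, 9, 10 (average 25/3), while SJF runs C, B, A and finishes them at 1, 4, 10 (average 5) — running short jobs first minimizes average turnaround.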

66

(NP) Priority scheduling

Runs processes based on assigned priority levels.

67

Crystal ball algorithm

A theoretical, non-implementable benchmark that represents the optimal sequence of process execution.

68

(P) Shortest Remaining Time First algorithm

This preemptive algorithm selects the process with the shortest remaining execution time to run next. It is optimal in minimizing turnaround time but may not account for context switch costs.

69

(P) Round Robin algorithm

Processes run for a fixed time slice (quantum) before the scheduler moves to the next process in the queue.
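
A sketch of Round Robin with a ready queue, for jobs all arriving at t = 0 (the job names and quantum are made up):

```python
from collections import deque

def round_robin(jobs, quantum):
    """Return finish time per job under Round Robin scheduling."""
    queue = deque(jobs)                  # ready queue of (name, remaining time)
    time, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)    # run for at most one quantum
        time += run
        if remaining == run:
            finish[name] = time          # job is done
        else:
            queue.append((name, remaining - run))  # back of the queue
    return finish
```

With bursts A=5, B=3, C=1 and quantum 2, the jobs finish at C=5, B=8, A=9: each process gets the CPU in turn, one quantum at a time.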

70

Multiple-Queue Priority scheduling

Categorizes processes into different priority queues, with each queue managed using strategies like Round Robin.

71

Static scheduling

Pre-defines the order of tasks before execution.

72

Dynamic scheduling

Makes decisions in real-time based on deadlines and system priorities.

73

Why does the operating system implement Memory management

To track and allocate memory efficiently, handle memory protection, and convert virtual addresses to physical addresses.

74

Physical addresses

Absolute locations on memory chips.

75

Virtual addresses

Relative addresses translated by hardware, under OS control, to physical locations.

76

Fence register

An early memory protection strategy that separates user program memory from OS memory, blocking access if a process tries to access addresses below the fence value.

77

Base and limit registers

Define memory boundaries for processes, converting logical addresses to physical addresses during runtime to ensure correct memory access.
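
The translation step can be sketched in a few lines; an address at or beyond the limit would trap to the OS (the function name and example numbers are ours):

```python
def translate(logical, base, limit):
    """Relocate a logical address using base and limit registers."""
    if logical >= limit:                 # protection check comes first
        raise MemoryError(f"logical address {logical} exceeds limit {limit}")
    return base + logical                # relocation: physical = base + logical
```

For example, translate(100, base=4000, limit=500) yields physical address 4100, while an access at logical address 600 is rejected.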

78

Bitmaps in memory management

Represent occupied and free memory blocks using bits.

79

Linked lists in memory management

Track memory usage with nodes representing occupied and empty blocks.

80

First fit memory allocation

Allocates the first available space.

81

Best fit memory allocation

Allocates the smallest suitable space.

82

Worst fit memory allocation

Allocates the largest space.
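
The three placement policies above differ only in which free hole they pick. A sketch over a list of hole sizes, returning the chosen hole's index (the function names are ours):

```python
def first_fit(holes, size):
    """Index of the first hole large enough, or None."""
    return next((i for i, h in enumerate(holes) if h >= size), None)

def best_fit(holes, size):
    """Index of the smallest hole large enough, or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Index of the largest hole, if it is large enough, or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None
```

For holes [10, 4, 20, 6] and a request of size 5: first fit picks index 0 (size 10), best fit index 3 (size 6), and worst fit index 2 (size 20).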

83

External fragmentation

Occurs when small fragments of memory remain between allocated partitions that are too small to be used by other processes.

84

Compaction

Consolidates free memory by grouping allocated partitions together, eliminating empty spaces between them. Similar to defragmentation in disk drives.

85

Segmentation

Divides processes into different segments like text, heap, and stack. Requires OS support for swapping segments or pages in and out of memory.

86

Paging

Divides memory into fixed-size pages. Requires OS support for swapping segments or pages in and out of memory.
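
Address translation under paging splits a virtual address into a page number and an offset, then swaps the page number for a frame number. A sketch with a made-up page table and 4 KiB pages:

```python
PAGE_SIZE = 4096                    # 4 KiB pages

def split(vaddr):
    """Virtual address -> (page number, offset within the page)."""
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

def translate(vaddr, page_table):
    page, offset = split(vaddr)
    frame = page_table[page]        # a missing entry would be a page fault
    return frame * PAGE_SIZE + offset
```

Virtual address 5000 is page 1 with offset 904; if page 1 maps to frame 7, the physical address is 7 * 4096 + 904 = 29576.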

87

What is the function of an overlay manager?

Manages the loading and unloading of memory overlays.

88

What does an overlay manager control?

It controls which program segments are in memory at any time.

89

In a monolithic kernel architecture, most modules exist in user space, and changes to them require recompiling the entire operating system.

False

90

A process is defined as a program in execution, and it includes pointers, counters, and variables.

True

91

In a typical representation of a process in memory, the stack grows upwards and the heap grows downwards.

False

92

Processes in a microkernel architecture communicate primarily through message passing, reducing the need for system calls.

True

93

The process control block (PCB) contains information about the process's file management, register values, and much more.

True

94

The scheduler is a part of the operating system responsible for selecting which process runs next, balancing the needs of processes to prevent starvation.

True

95

The scheduler sets an alarm to manage process execution time, ensuring that a process runs only for its allocated quantum of time.

True

96

A process in the "Ready" state is runnable but not currently executing on the CPU.

True

97

A context switch occurs only due to voluntary CPU yield operations, such as a process going to sleep.

False

98

A context switch involves saving the state of the currently running process so it can resume later.

True

99

Kernel-space threads can have threads with different states (e.g., ready, blocked).

True

100

Threads in a multithreaded process share code, data, and file descriptors, but have their own register information and stack.

True

