D686: Operating Systems for Computer Scientists (chapter 6 & 7)


93 Terms

1

cycle

Repeating loop

2

CPU burst

Scheduling process state in which the process executes on the CPU.

3

I/O burst

Scheduling process state in which the process performs I/O rather than executing on the CPU.

4

CPU scheduler

Kernel routine that selects a thread from the threads that are ready to execute and allocates a core to that thread

5

nonpreemptive

Under nonpreemptive scheduling, once a core has been allocated to a thread the thread keeps the core until it releases the core either by terminating or by switching to the waiting state.

6

cooperative

A form of scheduling in which threads voluntarily move from the running state (for example by yielding the CPU or blocking for I/O)

7

preemptive

A form of scheduling in which processes or threads are involuntarily moved from the running state (by for example a timer signaling the kernel to allow the next thread to run)

8

dispatcher

The dispatcher is the kernel routine that gives control of a core to the thread selected by the scheduler

9

dispatch latency

The time it takes for the dispatcher to stop one thread and start another running

10

Throughput

If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput.

11

CPU utilization

the percentage of time the CPU is actively working; in a real system it should range from about 40 percent (lightly loaded) to 90 percent (heavily loaded), as sustained 100 percent use can impact system performance and stability

12

turnaround time

the total time from process submission to completion, including waiting, execution, and I/O time

13

waiting time

the time a process spends waiting in the ready queue before it gets CPU time

14

response time

the time from the submission of a request to the first response being produced by the process, particularly important in interactive systems

15

First-come first-served (FCFS)

The simplest scheduling algorithm - the thread that requests a core first is allocated the core first, and subsequent threads are allocated cores in the order of their requests.
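
As a sketch (with illustrative burst lengths, not values from this deck), FCFS waiting times can be computed by accumulating the bursts of earlier arrivals:

```python
# FCFS with three hypothetical processes arriving at t=0 in this order;
# burst lengths (ms) are illustrative.
bursts = [("P1", 24), ("P2", 3), ("P3", 3)]

clock = 0
waiting = {}                  # time each process spends in the ready queue
for name, burst in bursts:
    waiting[name] = clock     # waits for everything that arrived earlier
    clock += burst

avg_wait = sum(waiting.values()) / len(waiting)
print(waiting, avg_wait)      # {'P1': 0, 'P2': 24, 'P3': 27} 17.0
```

Note how the long burst running first makes the short jobs wait behind it - exactly the convoy effect described below.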

16

Gantt chart

A bar chart that is used in the text to illustrate a schedule.

17

convoy effect

A scheduling phenomenon in which many threads wait for one long-running thread to get off a core, causing overall device and CPU utilization to be suboptimal.

18

shortest-job-first (SJF)

A scheduling algorithm that associates with each thread the length of the thread's next CPU burst and schedules the shortest first.

19

exponential average

A calculation used in scheduling to estimate the next CPU burst time based on the previous burst times (with exponential decay on older values).
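
As a sketch, the usual estimate is tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the most recent measured burst and tau_n the previous prediction; the alpha and sample values below are illustrative:

```python
def next_estimate(tau, t, alpha=0.5):
    # Exponential average: weight the latest burst t against the old
    # prediction tau; older bursts decay by (1 - alpha) each step.
    return alpha * t + (1 - alpha) * tau

tau = 10.0                    # initial guess (illustrative)
for t in [6, 4, 6]:           # measured CPU bursts
    tau = next_estimate(tau, t)
print(tau)                    # 10.0 -> 8.0 -> 6.0 -> 6.0
```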

20

shortest-remaining-time-first (SRTF)

A preemptive version of SJF: if a newly arrived thread's next CPU burst is shorter than the remaining time of the running thread, the scheduler preempts the running thread in its favor.

21

round-robin (RR)

A scheduling algorithm that is designed especially for time-sharing systems - similar to FCFS scheduling, but preemption is added to enable the system to switch between threads.
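
A minimal sketch of RR preemption, assuming all threads arrive at t=0 and ignoring context-switch cost:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return (name, completion_time) pairs in completion order."""
    queue = deque(bursts)          # (name, remaining burst) pairs
    clock, done = 0, []
    while queue:
        name, remaining = queue.popleft()
        run = min(remaining, quantum)
        clock += run
        if remaining <= quantum:
            done.append((name, clock))
        else:
            queue.append((name, remaining - quantum))  # preempted, requeue
    return done

# Same illustrative bursts as the FCFS example, quantum of 4 ms:
print(round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4))
# [('P2', 7), ('P3', 10), ('P1', 30)]
```

With preemption the short jobs finish early instead of waiting out P1's full burst, which is the point of adding a time quantum to FCFS.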

22

time quantum

A small unit of time used by scheduling algorithms as a basis for determining when to preempt a thread from the CPU to allow another to run.

23

time slice

See time quantum

24

priority scheduling

A scheduling algorithm in which a priority is associated with each thread and the free CPU core is allocated to the thread with the highest priority.
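
The ready queue for priority scheduling is often kept as a heap; this sketch uses illustrative thread names and assumes the common convention that a lower number means higher priority:

```python
import heapq

# (priority, name) pairs; lower number = higher priority
ready = [(3, "T1"), (1, "T2"), (2, "T3")]
heapq.heapify(ready)

# Pop threads in priority order
order = [heapq.heappop(ready)[1] for _ in range(3)]
print(order)   # ['T2', 'T3', 'T1']
```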

25

infinite blocking

See starvation

26

starvation

A scheduling risk in which a thread that is ready to run never gets put onto the CPU due to the scheduling algorithm - it is starved for CPU time.

27

aging

Aging is a solution to scheduling starvation and involves gradually increasing the priority of threads as they wait for CPU time.

28

multilevel queue

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.

29

foreground

A thread that is interactive and has input directed to it (such as a window currently selected as active or a terminal window that is currently selected to receive input).

30

background

A thread that is not currently interactive (has no interactive input directed to it) such as one in a batch job or not currently being used by a user.

31

multilevel feedback queue

A multilevel queue scheduling algorithm that additionally allows a process to move between queues.

32

real-time

systems where tasks must be completed within specific time limits 

33

hard real-time systems

systems where tasks must meet deadlines without exception 

34

soft real-time systems

systems in which tasks should meet their deadlines, but an occasional missed deadline degrades service rather than causing failure

35

real-time scheduling

the process of organizing tasks so they meet their deadlines 

36

online scheduler

a scheduler that makes decisions about tasks while the system is running

37

offline scheduler

a scheduler that plans tasks before the system starts running 

38

static scheduler

a scheduler that makes a fixed schedule before tasks start 

39

feasibility tests/schedulability tests

methods to check if tasks in a system can be completed on time 

40

dynamic scheduler

a scheduler that adjusts the task schedule based on current conditions

41

preemptive scheduler

scheduler that can interrupt a currently running task to start or resume another task, ensuring that higher-priority tasks receive CPU time as needed 

42

non-preemptive scheduler

a scheduler that allows a task to run to completion before switching to another task, ensuring that once a task starts, it is not interrupted until it finishes 

43

multilevel queue

a scheduling algorithm dividing the ready queue into multiple distinct queues

44

foreground

refers to an interactive thread actively receiving input or engaging with user interaction

45

background

describes a thread or process not actively receiving user input or interaction, often running in batch mode or idle

46

multilevel feedback queue

a scheduling algorithm allowing processes to move between different priority queues based on their CPU usage characteristics

47

symmetric multiprocessing (SMP)

each processor manages its own scheduling, handling both kernel and user threads with potential contention for system resources

48

asymmetric multiprocessing (AMP)

A system where one processor handles all system tasks and scheduling, while other processors execute only user code

49

chip multithreading (CMT)

CPUs with multiple cores, each supporting numerous hardware threads, that enhance overall processing efficiency

50

load balancing

distributing workload evenly across processors in an SMP system to maximize efficiency and prevent idle processors

51

push migration

load-balancing technique where a task redistributes threads from overloaded processors to those with lighter loads

52

pull migration

load-balancing method where an idle processor retrieves tasks from busy processors to balance the workload

53

processor affinity

keeping a thread on the same processor to benefit from its cache and reduce cache invalidation costs

54

soft affinity

strategy where the operating system aims to keep a thread on the same processor but allows it to migrate if necessary

55

hard affinity

A strategy where the operating system allows a thread to specify a set of processors on which it can run, ensuring it stays within this set

56

memory stall

a delay in thread execution when accessing memory that is not currently in the CPU cache, requiring retrieval from main memory

57

hardware threads

threads that a CPU core can manage, either one per core or multiple, to optimize performance by switching threads during stalls

58

deadlock

a condition where two or more processes or threads are unable to proceed because each is waiting for the other to release a required resource

59

exclusive access

permission allowing only one process or thread to access a resource at a time

60

release

the action of freeing a resource after use.

61

request

the action of asking for a resource

62

resources

items or data used by processes in computing, such as memory, CPU time, or files, which need to be managed and shared among different processes

63

use

the process of utilizing a requested resource

64

system resource-allocation graph

A directed graph for precise description of deadlocks

65

request edge

In a system resource-allocation graph, an edge (arrow) indicating a resource request

66

assignment edge

In a system resource-allocation graph, an edge (arrow) indicating a resource assignment.

67

assignment edge

a directed edge from a resource to a thread or process, showing resource allocation

68

circular wait

a set of threads exists where each thread is waiting for a resource held by the next thread in the set, forming a cycle

69

edges

connections between nodes representing relationships or allocations

70

graph

a collection of nodes (or vertices) and edges (or lines) connecting them

71

hold and wait

a thread holds at least one resource while waiting to acquire additional resources held by other threads

72

mutual exclusion

at least one resource must be held exclusively by one thread at a time, blocking others until released

73

no preemption

resources cannot be forcibly taken from a thread; they can only be released voluntarily by the thread holding them

74

nodes (vertices)

entities in the graph representing resources, threads, or processes

75

request edge

a directed edge from a thread or process to a resource, indicating a request for the resource

76

system resource-allocation graph

directed graph used to describe deadlock situations precisely

77

allocation

the process of assigning resources to different tasks or processes

78

Banker's algorithm

an algorithm that prevents deadlocks by ensuring resource allocation does not lead to an unsafe state
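
The core of the Banker's algorithm is its safety check; a minimal sketch (the matrix layout and example values below are illustrative):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: True if some execution order lets every
    process finish. Rows are processes, columns are resource types."""
    work = list(available)
    finished = [False] * len(allocation)
    progressed = True
    while progressed:
        progressed = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Process i can run to completion, then releases what it holds
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progressed = True
    return all(finished)

allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))   # True: a safe order exists
```

A request is granted only if pretending to grant it still leaves the system in a state this check reports as safe.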

79

deadlock avoidance

strategies that dynamically allocate resources only if they ensure that the system remains in a safe state where deadlock is impossible

80

deadlock prevention

techniques to ensure that deadlock cannot occur by systematically denying conditions that lead to deadlock, like holding onto resources indefinitely

81

dynamic allocation

assigning resources to processes as needed, rather than fixing them in advance

82

resource manager

system component that controls and allocates resources among processes

83

resource request

when a process asks for the resources it needs to complete a task

84

resource release

when a process finishes using a resource and makes it available for others

85

resource sharing

allowing multiple processes to use the same resource without interfering with each other

86

safe state

a state where the system can allocate resources to processes in some order and avoid deadlock

87

static allocation

assigning resources to processes before they start and not changing them during execution

88

synchronization

coordinating processes to ensure they do not interfere with each other when accessing resources

89

unsafe state

a state where the system cannot guarantee deadlock avoidance

90

wait-for graph

In deadlock detection, a variant of the resource-allocation graph with resource nodes removed; indicates a deadlock if the graph contains a cycle
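
Finding that cycle is ordinary graph cycle detection; a sketch over a hypothetical adjacency map (thread names are illustrative):

```python
def has_deadlock(wait_for):
    """wait_for maps each thread to the threads it is waiting on;
    a cycle in this graph indicates deadlock."""
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {t: WHITE for t in wait_for}

    def visit(t):
        color[t] = GRAY
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:
                return True        # back edge: cycle found
            if color.get(u, WHITE) == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in wait_for)

print(has_deadlock({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))  # True
print(has_deadlock({"T1": ["T2"], "T2": []}))                    # False
```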

91

thread dump

In Java, a snapshot of the state of all threads in an application; a useful debugging tool for deadlocks.

92

recovery mode

A system boot state providing limited services and designed to enable the system admin to repair system problems and debug system startup.

93

rollback

reverting a process to a previous safe state to recover from deadlock
