D686: Operating Systems for Computer Scientists (chapter 6 & 7)

93 Terms

1
New cards

cycle

Repeating loop

2
New cards

CPU burst

Scheduling process state in which the process executes on the CPU.

3
New cards

I/O burst

Scheduling process state in which the process performs (or waits for) I/O

4
New cards

CPU scheduler

Kernel routine that selects a thread from the threads that are ready to execute and allocates a core to that thread

5
New cards

nonpreemptive

Under nonpreemptive scheduling, once a core has been allocated to a thread the thread keeps the core until it releases the core either by terminating or by switching to the waiting state.

6
New cards

cooperative

A form of scheduling in which threads voluntarily move from the running state

7
New cards

preemptive

A form of scheduling in which processes or threads are involuntarily moved from the running state (by for example a timer signaling the kernel to allow the next thread to run)

8
New cards

dispatcher

The dispatcher is the kernel routine that gives control of a core to the thread selected by the scheduler

9
New cards

dispatch latency

The time it takes for the dispatcher to stop one thread and start another running

10
New cards

Throughput

If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes that are completed per time unit, called throughput.

11
New cards

CPU utilization

the percentage of time the CPU is actively working; the optimal utilization is around 90 percent, as sustained 100 percent use can impact system performance and stability

12
New cards

turnaround time

the total time from process submission to completion, including waiting, execution, and I/O time

13
New cards

waiting time

the time a process spends waiting in the ready queue before it gets CPU time

14
New cards

response time

the time from the submission of a request to the first response being produced by the process, particularly important in interactive systems

15
New cards

First-come first-served (FCFS)

The simplest scheduling algorithm - the thread that requests a core first is allocated the core first, and others following get cores in the order of their requests.
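
Since FCFS allocates cores strictly in arrival order, each thread's waiting time is just the sum of the bursts ahead of it. A minimal sketch (the burst lengths are hypothetical example values):

```python
# Sketch: FCFS waiting times for hypothetical CPU bursts (all arrive at t=0).
# A long first burst shows how later, shorter processes pile up behind it.
bursts = [("P1", 24), ("P2", 3), ("P3", 3)]  # (name, burst length)

start = 0
waiting = {}
for name, burst in bursts:
    waiting[name] = start   # time spent in the ready queue before first run
    start += burst          # next process starts when this one finishes

avg_wait = sum(waiting.values()) / len(waiting)
print(waiting)    # {'P1': 0, 'P2': 24, 'P3': 27}
print(avg_wait)   # 17.0
```

Reordering the same bursts so the short ones run first would drop the average wait sharply, which is the intuition behind the convoy effect and SJF below.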

16
New cards

Gantt chart

A bar chart that is used in the text to illustrate a schedule.

17
New cards

convoy effect

A scheduling phenomenon in which short threads wait for one long-running thread to get off a core, causing overall device and CPU utilization to be suboptimal.

18
New cards

shortest-job-first (SJF)

A scheduling algorithm that associates with each thread the length of the thread's next CPU burst and schedules the shortest first.

19
New cards

exponential average

A calculation used in scheduling to estimate the next CPU burst time based on the previous burst times (with exponential decay on older values).
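
The exponential average is computed as tau_next = alpha * t + (1 - alpha) * tau, where t is the last observed burst and tau the previous estimate. A small sketch (alpha = 0.5 and the burst history are hypothetical values):

```python
# Sketch: exponential averaging of CPU burst lengths.
# tau_next = alpha * t + (1 - alpha) * tau
def next_estimate(tau, t, alpha=0.5):
    """Predict the next burst from the previous estimate tau and observed burst t."""
    return alpha * t + (1 - alpha) * tau

tau = 10.0                      # initial guess for the first burst
for observed in [6, 4, 6, 4]:   # measured CPU bursts, most recent last
    tau = next_estimate(tau, observed)
print(tau)   # 5.0 — recent bursts dominate, old ones decay away
```

With alpha = 0.5 each older burst contributes half as much as the one after it, which is the "exponential decay on older values" the definition refers to.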

20
New cards

shortest-remaining-time-first (SRTF)

The preemptive version of SJF: the scheduler always runs the thread with the shortest remaining CPU burst, preempting the running thread if a newly arrived thread's remaining time is shorter.

21
New cards

round-robin (RR)

A scheduling algorithm that is designed especially for time-sharing systems - similar to FCFS scheduling, but preemption is added to enable the system to switch between threads.
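
Round-robin behavior can be sketched with a queue: each thread runs for at most one quantum, then goes to the back of the ready queue if unfinished. The quantum and burst lengths below are hypothetical example values:

```python
from collections import deque

# Sketch: round-robin with a hypothetical time quantum of 4.
quantum = 4
ready = deque([("P1", 24), ("P2", 3), ("P3", 3)])  # (name, remaining burst)

time, completion = 0, {}
while ready:
    name, remaining = ready.popleft()
    ran = min(quantum, remaining)
    time += ran
    if remaining > ran:
        ready.append((name, remaining - ran))  # preempted: back of the queue
    else:
        completion[name] = time                # finished within this quantum
print(completion)   # {'P2': 7, 'P3': 10, 'P1': 30}
```

Note how the short threads finish quickly even though a long thread arrived first, at the cost of extra context switches for P1.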

22
New cards

time quantum

A small unit of time used by scheduling algorithms as a basis for determining when to preempt a thread from the CPU to allow another to run.

23
New cards

time slice

See time quantum

24
New cards

priority scheduling

A scheduling algorithm in which a priority is associated with each thread and the free CPU core is allocated to the thread with the highest priority.

25
New cards

infinite blocking

See starvation

26
New cards

starvation

A scheduling risk in which a thread that is ready to run never gets put onto the CPU due to the scheduling algorithm - it is starved for CPU time.

27
New cards

aging

Aging is a solution to scheduling starvation and involves gradually increasing the priority of threads as they wait for CPU time.

28
New cards

multilevel queue

A multilevel queue scheduling algorithm partitions the ready queue into several separate queues.

29
New cards

foreground

A thread that is interactive and has input directed to it (such as a window currently selected as active or a terminal window that is currently selected to receive input).

30
New cards

background

A thread that is not currently interactive (has no interactive input directed to it) such as one in a batch job or not currently being used by a user.

31
New cards

multilevel feedback queue

The multilevel feedback queue scheduling algorithm that allows a process to move between queues.

32
New cards

real-time

systems where tasks must be completed within specific time limits 

33
New cards

hard real-time systems

systems where tasks must meet deadlines without exception 

34
New cards

soft real-time systems

systems in which missing a deadline is undesirable but not fatal; critical real-time tasks get priority over other tasks, but deadlines are not guaranteed

35
New cards

real-time scheduling

the process of organizing tasks so they meet their deadlines 

36
New cards

online scheduler

a scheduler that makes decisions about tasks while the system is running

37
New cards

offline scheduler

a scheduler that plans tasks before the system starts running 

38
New cards

static scheduler

a scheduler that makes a fixed schedule before tasks start 

39
New cards

feasibility tests/schedulability tests

methods to check if tasks in a system can be completed on time 

40
New cards

dynamic scheduler

a scheduler that adjusts the task schedule based on current conditions

41
New cards

preemptive scheduler

scheduler that can interrupt a currently running task to start or resume another task, ensuring that higher-priority tasks receive CPU time as needed 

42
New cards

non-preemptive scheduler

a scheduler that allows a task to run to completion before switching to another task, ensuring that once a task starts, it is not interrupted until it finishes 

43
New cards

multilevel queue

a scheduling algorithm dividing the ready queue into multiple distinct queues

44
New cards

foreground

refers to an interactive thread actively receiving input or engaging with user interaction

45
New cards

background

describes a thread or process not actively receiving user input or interaction, often running in batch mode or idle

46
New cards

multilevel feedback queue

a scheduling algorithm allowing processes to move between different priority queues based on their CPU usage characteristics

47
New cards

symmetric multiprocessing (SMP)

each processor manages its own scheduling, handling both kernel and user threads with potential contention for system resources

48
New cards

asymmetric multiprocessing (AMP)

A system where one processor handles all system tasks and scheduling, while other processors execute only user code

49
New cards

chip multithreading (CMT)

CPUs with multiple cores, each supporting numerous hardware threads, that enhance overall processing efficiency

50
New cards

load balancing

distributing workload evenly across processors in an SMP system to maximize efficiency and prevent idle processors

51
New cards

push migration

load-balancing technique where a task redistributes threads from overloaded processors to those with lighter loads

52
New cards

pull migration

load-balancing method where an idle processor retrieves tasks from busy processors to balance the workload

53
New cards

processor affinity

keeping a thread on the same processor to benefit from its cache and reduce cache invalidation costs

54
New cards

soft affinity

strategy where the operating system aims to keep a thread on the same processor but allows it to migrate if necessary

55
New cards

hard affinity

A strategy where the operating system allows a thread to specify a set of processors on which it can run, ensuring it stays within this set

56
New cards

memory stall

a delay in thread execution when accessing memory that is not currently in the CPU cache, requiring retrieval from main memory

57
New cards

hardware threads

threads that a CPU core can manage, either one per core or multiple, to optimize performance by switching threads during stalls

58
New cards

deadlock

a condition where two or more processes or threads are unable to proceed because each is waiting for the other to release a required resource

59
New cards

exclusive access

permission allowing only one process or thread to access a resource at a time

60
New cards

release

the action of freeing a resource after use.

61
New cards

request

the action of asking for a resource

62
New cards

resources

items or data used by processes in computing, such as memory, CPU time, or files, which need to be managed and shared among different processes

63
New cards

use

the process of utilizing a requested resource

64
New cards

system resource-allocation graph

A directed graph for precise description of deadlocks

65
New cards

request edge

In a system resource-allocation graph, an edge (arrow) indicating a resource request

66
New cards

assignment edge

In a system resource-allocation graph, an edge (arrow) indicating a resource assignment.

67
New cards

assignment edge

a directed edge from a resource to a thread or process, showing resource allocation

68
New cards

circular wait

a set of threads exists where each thread is waiting for a resource held by the next thread in the set, forming a cycle

69
New cards

edges

connections between nodes representing relationships or allocations

70
New cards

graph

a collection of nodes (or vertices) and edges (or lines) connecting them

71
New cards

hold and wait

a thread holds at least one resource while waiting to acquire additional resources held by other threads

72
New cards

mutual exclusion

at least one resource must be held exclusively by one thread at a time, blocking others until released

73
New cards

no preemption

resources cannot be forcibly taken from a thread; they can only be released voluntarily by the thread holding them

74
New cards

nodes (vertices)

entities in the graph representing resources, threads, or processes

75
New cards

request edge

a directed edge from a thread or process to a resource, indicating a request for the resource

76
New cards

system resource-allocation graph

directed graph used to describe deadlock situations precisely

77
New cards

allocation

the process of assigning resources to different tasks or processes

78
New cards

Banker's algorithm

an algorithm that prevents deadlocks by ensuring resource allocation does not lead to an unsafe state
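
The core of the Banker's algorithm is a safety check: grant a request only if, afterwards, some order still lets every thread finish. A sketch of that check (the allocation/need matrices and available vector are hypothetical example values):

```python
# Sketch: the safety check at the heart of the Banker's algorithm.
def is_safe(available, allocation, need):
    work = list(available)            # resources currently free
    finish = [False] * len(allocation)
    changed = True
    while changed:
        changed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                # Thread i can run to completion and return its resources.
                work = [w + a for w, a in zip(work, alloc)]
                finish[i] = True
                changed = True
    return all(finish)                # safe iff every thread can finish

# Example: 5 threads, 3 resource types.
available  = [3, 3, 2]
allocation = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
need       = [[7,4,3], [1,2,2], [6,0,0], [0,1,1], [4,3,1]]
print(is_safe(available, allocation, need))  # True: a safe sequence exists
```

If granting a request would make this check return False, the Banker's algorithm makes the requesting thread wait instead.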

79
New cards

deadlock avoidance

strategies that dynamically allocate resources only if they ensure that the system remains in a safe state where deadlock is impossible

80
New cards

deadlock prevention

techniques to ensure that deadlock cannot occur by systematically denying conditions that lead to deadlock, like holding onto resources indefinitely

81
New cards

dynamic allocation

assigning resources to processes as needed, rather than fixing them in advance

82
New cards

resource manager

system component that controls and allocates resources among processes

83
New cards

resource request

when a process asks for resources it needs to complete a task

84
New cards

resource release

when a process finishes using a resource and makes it available for others

85
New cards

resource sharing

allowing multiple processes to use the same resource without interfering with each other

86
New cards

safe state

a state where the system can allocate resources to processes in some order and avoid deadlock

87
New cards

static allocation

assigning resources to processes before they start and not changing them during execution

88
New cards

synchronization

coordinating processes to ensure they do not interfere with each other when accessing resources

89
New cards

unsafe state

a state where the system cannot guarantee deadlock avoidance

90
New cards

wait-for graph

In deadlock detection, a variant of the resource-allocation graph with resource nodes removed; indicates a deadlock if the graph contains a cycle
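
Detecting a deadlock in a wait-for graph reduces to cycle detection, e.g. with a depth-first search. A sketch (the example graphs are hypothetical):

```python
# Sketch: deadlock detection on a wait-for graph via depth-first search.
# graph maps each thread to the threads it is waiting for.
def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on DFS path / done
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph.get(node, []):
            if color.get(nxt, WHITE) == GRAY:          # back edge: cycle
                return True
            if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in graph)

# T1 waits for T2, T2 for T3, T3 for T1 -> circular wait, i.e. deadlock.
print(has_cycle({"T1": ["T2"], "T2": ["T3"], "T3": ["T1"]}))  # True
print(has_cycle({"T1": ["T2"], "T2": ["T3"], "T3": []}))      # False
```

A deadlock-detection algorithm would run a check like this periodically and, on a cycle, pick a victim for rollback or termination.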

91
New cards

thread dump

In Java, a snapshot of the state of all threads in an application; a useful debugging tool for deadlocks.

92
New cards

recovery mode

A system boot state providing limited services and designed to enable the system admin to repair system problems and debug system startup.

93
New cards

rollback

reverting a process to a previous safe state to recover from deadlock