D686 - Operating Systems - Chapter 6


56 Terms

1
New cards

cycle

Repeating loop

2
New cards

CPU burst

Scheduling process state in which the process executes on CPU.

3
New cards

I/O burst

Scheduling process state in which the process performs or waits for I/O (the CPU is not executing the process).

4
New cards

CPU scheduler

Kernel routine that selects a thread from the threads that are ready to execute and allocates a core to that thread.

5
New cards

nonpreemptive

Under nonpreemptive scheduling, once a core has been allocated to a thread the thread keeps the core until it releases the core either by terminating or by switching to the waiting state.

6
New cards

cooperative

A form of scheduling in which threads voluntarily move from the running state.

7
New cards

preemptive

A form of scheduling in which processes or threads are involuntarily moved from the running state (by for example a timer signaling the kernel to allow the next thread to run).

8
New cards

____ is the number of processes that are completed per time unit.

Throughput

9
New cards

throughput

Generally the amount of work done over time. In scheduling, the number of threads completed per unit time.

10
New cards

First-come first-served (FCFS)

The simplest scheduling algorithm - the thread that requests a core first is allocated the core first, and others following get cores in the order of their requests.
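
The deck gives no code, so here is a minimal Python sketch of FCFS waiting times; the burst lengths (24, 3, 3 ms) are hypothetical, chosen to show how a long first burst inflates everyone's wait:

```python
# FCFS sketch: threads run strictly in arrival order, so each thread
# waits for the sum of all earlier bursts.
def fcfs_waiting_times(bursts):
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)  # time this thread spent waiting before running
        elapsed += b           # the core is busy for the whole burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])  # hypothetical burst lengths in ms
# waits == [0, 24, 27], average 17 ms
```

Reordering the same bursts as [3, 3, 24] would drop the average wait to 3 ms, which is the intuition behind shortest-job-first below.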

11
New cards

Gantt chart

A bar chart that is used in the text to illustrate a schedule.

12
New cards

convoy effect

A scheduling phenomenon in which many threads wait for one long-running thread to get off a core, causing overall device and CPU utilization to be suboptimal.

13
New cards

shortest-job-first (SJF)

A scheduling algorithm that associates with each thread the length of the thread's next CPU burst and schedules the shortest first.
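
A short Python sketch of SJF, assuming all threads arrive at time 0 and burst lengths are known exactly (in practice they must be estimated, as the next card describes):

```python
# SJF sketch: pick the pending thread with the shortest next burst,
# then account for waiting time as in FCFS.
def sjf_waiting_times(bursts):
    order = sorted(range(len(bursts)), key=lambda i: bursts[i])
    waits, elapsed = [0] * len(bursts), 0
    for i in order:
        waits[i] = elapsed   # thread i starts after all shorter bursts
        elapsed += bursts[i]
    return waits

waits = sjf_waiting_times([6, 8, 7, 3])  # hypothetical bursts in ms
# run order is 3, 6, 7, 8 -> waits [3, 16, 9, 0], average 7 ms
```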

14
New cards

exponential average

A calculation used in scheduling to estimate the next CPU burst time based on the previous burst times (with exponential decay on older values).
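The usual formula is tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the measured length of the most recent burst and tau_n the previous estimate. A minimal Python sketch; alpha = 0.5 and the burst values are common illustrative choices, not mandated:

```python
# Exponential average sketch: new estimate blends the latest measured
# burst with the old estimate, so older bursts decay exponentially.
def next_estimate(measured, previous_estimate, alpha=0.5):
    return alpha * measured + (1 - alpha) * previous_estimate

tau = 10.0                   # initial guess for the first burst
for t in [6, 4, 6, 4]:       # hypothetical measured CPU bursts
    tau = next_estimate(t, tau)
# successive estimates: 8.0 -> 6.0 -> 6.0 -> 5.0
```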

15
New cards

shortest-remaining-time-first (SRTF)

A preemptive variant of SJF: the scheduler always runs the thread with the shortest remaining time until completion, preempting the current thread if a shorter one arrives.

16
New cards

round-robin (RR)

A scheduling algorithm that is designed especially for time-sharing systems - similar to FCFS scheduling, but preemption is added to enable the system to switch between threads.
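
A minimal Python sketch of round-robin, reusing the hypothetical bursts from the FCFS card with an assumed quantum of 4 ms:

```python
from collections import deque

# Round-robin sketch: each thread runs for at most `quantum` ms, then is
# preempted and sent to the back of the ready queue until it finishes.
def rr_completion_times(bursts, quantum):
    ready = deque(enumerate(bursts))     # (thread index, remaining time)
    done, clock = [0] * len(bursts), 0
    while ready:
        i, remaining = ready.popleft()
        run = min(quantum, remaining)
        clock += run
        if remaining > run:
            ready.append((i, remaining - run))  # preempted, requeued
        else:
            done[i] = clock                     # thread i completes here
    return done

done = rr_completion_times([24, 3, 3], quantum=4)
# completion times: P1 at 30 ms, P2 at 7 ms, P3 at 10 ms
```

Note how the short threads finish early instead of waiting behind the 24 ms burst, at the cost of extra context switches for the long thread.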

17
New cards

time quantum/slice

A small unit of time used by scheduling algorithms as a basis for determining when to preempt a thread from the CPU to allow another to run.

18
New cards

priority scheduling

A scheduling algorithm in which a priority is associated with each thread and the free CPU core is allocated to the thread with the highest priority.

19
New cards

starvation/infinite blocking

A scheduling risk in which a thread that is ready to run never gets put onto the CPU due to the scheduling algorithm - it is starved for CPU time.

20
New cards

aging

A solution to scheduling starvation that involves gradually increasing the priority of threads as they wait for CPU time.
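
A toy Python sketch of aging; the boost of 1 per tick and the lower-number-is-higher-priority convention are illustrative assumptions, not taken from any particular kernel:

```python
# Aging sketch: on every scheduling tick, threads that did not run
# gain priority, so a low-priority thread cannot starve forever.
def pick_and_age(threads):
    # threads: dict name -> priority (lower number = higher priority)
    chosen = min(threads, key=threads.get)
    for name in threads:
        if name != chosen:
            threads[name] -= 1  # waiting threads age toward higher priority
    return chosen

threads = {"interactive": 1, "batch": 5}
ran = [pick_and_age(threads) for _ in range(6)]
# "batch" starts far behind but runs on the sixth tick once its
# aged priority beats "interactive"
```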

21
New cards

real-time

systems where tasks must be completed within specific time limits

22
New cards

hard real-time systems

systems where tasks must meet deadlines without exception

23
New cards

soft real-time systems

systems in which meeting deadlines is desirable but not guaranteed; missing a deadline degrades service quality rather than causing failure

24
New cards

real-time scheduling 

the process of organizing tasks so they meet their deadlines

25
New cards

online scheduler

a scheduler that makes decisions about tasks while the system is running

26
New cards

offline scheduler

a scheduler that plans tasks before the system starts running

27
New cards

static scheduler

a scheduler that makes a fixed schedule before tasks start

28
New cards

feasibility tests/schedulability tests

methods to check if tasks in a system can be completed on time

29
New cards

dynamic scheduler

a scheduler that adjusts the task schedule based on current conditions

30
New cards

preemptive scheduler

scheduler that can interrupt a currently running task to start or resume another task, ensuring that higher-priority tasks receive CPU time as needed

31
New cards

non-preemptive scheduler

a scheduler that allows a task to run to completion before switching to another task, ensuring that once a task starts, it is not interrupted until it finishes

32
New cards

multilevel queue

a scheduling algorithm dividing the ready queue into multiple distinct queues

33
New cards

foreground

refers to an interactive thread actively receiving input or engaging with user interaction

34
New cards

background

describes a thread or process not actively receiving user input or interaction, often running in batch mode or idle

35
New cards

multilevel feedback queue

a scheduling algorithm allowing processes to move between different priority queues based on the characteristics of their CPU bursts
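
A compact Python sketch of a two-level feedback queue; the quanta (4 and 8 ms) and the demote-on-full-quantum rule are illustrative assumptions:

```python
from collections import deque

# MLFQ sketch with two levels: queue 0 (quantum 4) and queue 1 (quantum 8).
# A thread that exhausts its level-0 quantum is demoted to level 1,
# which only runs when level 0 is empty.
def mlfq_trace(bursts):
    q0 = deque(enumerate(bursts))   # (thread index, remaining time)
    q1, trace = deque(), []
    while q0 or q1:
        level = 0 if q0 else 1
        i, rem = (q0 if level == 0 else q1).popleft()
        quantum = 4 if level == 0 else 8
        run = min(quantum, rem)
        trace.append((i, level, run))   # record who ran, where, how long
        if rem > run:
            q1.append((i, rem - run))   # demoted (or stays) at level 1
    return trace

trace = mlfq_trace([10, 3])
# P0 runs 4 ms at level 0, P1 runs 3 ms at level 0,
# then P0 finishes its remaining 6 ms at level 1
```

CPU-bound threads thus sink to the lower queue while short interactive bursts stay at the top, which is the behavior the definition above describes.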

36
New cards

symmetric multiprocessing (SMP)

each processor manages its own scheduling, handling both kernel and user threads with potential contention for system resources

37
New cards

asymmetric multiprocessing (AMP)

A system where one processor handles all system tasks and scheduling, while other processors execute only user code

38
New cards

chip multithreading (CMT)

CPUs with multiple cores, each supporting numerous hardware threads, that enhance overall processing efficiency

39
New cards

load balancing

distributing workload evenly across processors in an SMP system to maximize efficiency and prevent idle processors

40
New cards

push migration

load-balancing technique where a task redistributes threads from overloaded processors to those with lighter loads

41
New cards

pull migration

load-balancing method where an idle processor retrieves tasks from busy processors to balance the workload

42
New cards

processor affinity

keeping a thread on the same processor to benefit from its cache and reduce cache invalidation costs

43
New cards

soft affinity

strategy where the operating system aims to keep a thread on the same processor but allows it to migrate if necessary

44
New cards

hard affinity

A strategy where the operating system allows a thread to specify a set of processors on which it can run, ensuring it stays within this set

45
New cards

memory stall

a delay in thread execution when accessing memory that is not currently in the CPU cache, requiring retrieval from main memory

46
New cards

hardware threads

threads that a CPU core can manage, either one per core or multiple, to optimize performance by switching threads during stalls

47
New cards

What is the method used to determine which processes will be assigned to the CPU for execution?

CPU scheduling

48
New cards

How does a preemptive scheduling algorithm differ from a non-preemptive scheduling algorithm?

Preemptive scheduling can interrupt a running process to assign the CPU to another process.

49
New cards

Which criterion measures the total time taken from the submission of a process to its completion?

Turnaround time

50
New cards

Which scheduling criterion aims to keep the CPU as busy as possible?

Utilization

51
New cards

Which scheduling algorithm executes the process that arrives first?

First-come, first-served (FCFS)

52
New cards

How does shortest job first scheduling determine the order of process execution?

By the length of the next CPU burst of each process

53
New cards

Which scheduling algorithm allocates a fixed time slice to each process in the ready queue?

Round-robin

54
New cards

How is real-time scheduling different from traditional scheduling algorithms?

Real-time scheduling ensures processes meet deadlines.

55
New cards

Which type of scheduling uses multiple queues, each with its own scheduling algorithm?

Multilevel queue

56
New cards

What is the purpose of load balancing in multiprocessor scheduling?

To ensure equal distribution of tasks across processors