CPU Scheduling Concepts

Flashcards covering key vocabulary terms and concepts related to CPU scheduling from the Operating System Concepts lecture.

35 Terms

1. CPU Utilization

The percentage of time the CPU is actively working.

2. Throughput

The number of processes that complete their execution per time unit.

3. Turnaround Time

The total time taken to execute a particular process.

4. Waiting Time

The time a process has been in the ready queue waiting to execute.

5. Response Time

The time it takes from when a request was submitted until the first response is produced.

6. CPU–I/O Burst Cycle

A pattern of execution that consists of alternating periods of CPU execution and I/O wait.

7. Preemptive Scheduling

A scheduling scheme in which the running process can be interrupted and moved back to the ready state.

8. Nonpreemptive Scheduling

A scheduling policy where a process keeps the CPU until it voluntarily releases it.

9. FCFS Scheduling

First-Come, First-Served; a scheduling algorithm where the first process in the ready queue is the first to execute.
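
As a rough worked example (not part of the original card set), the Python sketch below computes FCFS waiting times for three hypothetical processes that all arrive at time 0 with burst lengths of 24, 3, and 3 ms; the long first burst also shows why FCFS invites the convoy effect.

```python
# Minimal FCFS sketch: processes run in arrival order; each one waits for
# every burst ahead of it. Burst values below are hypothetical examples.
bursts = {"P1": 24, "P2": 3, "P3": 3}  # CPU burst lengths in ms

start = 0
waiting = {}
for name, burst in bursts.items():   # dict preserves insertion (arrival) order
    waiting[name] = start            # time spent in the ready queue
    start += burst                   # next process starts when this one finishes

print(waiting)                                   # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waiting.values()) / len(waiting))      # 17.0 ms average wait
```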

10. SJF Scheduling

Shortest Job First; selects the process with the shortest next CPU burst to run next.
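
A minimal sketch of nonpreemptive SJF, assuming four hypothetical processes that are all ready at time 0; picking the shortest burst first minimizes the average waiting time.

```python
# Nonpreemptive SJF sketch: run processes in order of shortest next CPU burst.
# Burst values are hypothetical; all processes are assumed ready at time 0.
bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}

start = 0
waiting = {}
for name, burst in sorted(bursts.items(), key=lambda kv: kv[1]):
    waiting[name] = start
    start += burst

print(waiting)                                   # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waiting.values()) / len(waiting))      # 7.0 ms average wait
```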

11. Round Robin Scheduling

Each process gets a fixed time slice (quantum) to execute, after which it is returned to the ready queue.
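
A small Round Robin simulation sketch, assuming a hypothetical 4 ms quantum and three processes that arrive together at time 0.

```python
from collections import deque

# Round Robin sketch: each process runs at most one quantum, then returns
# to the back of the ready queue. Quantum and bursts are hypothetical.
quantum = 4
remaining = {"P1": 24, "P2": 3, "P3": 3}
ready = deque(remaining)          # ready queue of process names
clock = 0
finish = {}

while ready:
    name = ready.popleft()
    run = min(quantum, remaining[name])
    clock += run
    remaining[name] -= run
    if remaining[name] == 0:
        finish[name] = clock      # completion time (arrival assumed at t = 0)
    else:
        ready.append(name)        # not finished: requeue for another quantum

print(finish)                     # {'P2': 7, 'P3': 10, 'P1': 30}
```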

12. Priority Scheduling

Assigns priority to processes and allocates the CPU to the process with the highest priority.

13. Aging

A technique to prevent starvation by gradually increasing the priority of waiting processes.
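
A sketch combining the two previous cards: priority scheduling in which the scheduler ages every process left waiting in the ready queue. The priority values and aging step are hypothetical.

```python
# Priority scheduling with aging (lower number = higher priority).
# Each time a process is dispatched, everything still waiting gets a small
# priority boost so low-priority processes are not starved.
AGING_STEP = 1

def pick_next(ready):
    """ready: list of [priority, name] pairs; returns the name to dispatch."""
    ready.sort()                       # smallest priority number runs first
    priority, name = ready.pop(0)
    for entry in ready:                # age the processes left behind
        entry[0] = max(0, entry[0] - AGING_STEP)
    return name

ready = [[5, "P1"], [1, "P2"], [9, "P3"]]
print(pick_next(ready))   # P2
print(ready)              # [[4, 'P1'], [8, 'P3']]
```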

14. Virtual CPU

The illusion, created by the operating system through time sharing, that each process has its own CPU, so multiple processes appear to run simultaneously.

15. Dispatcher

The module that gives control of the CPU to the chosen process, switching context as necessary.

16. Preemptive version of SJF

Shortest Remaining Time First; preempts the running process if a newly arrived process has a CPU burst shorter than the time remaining for the current one.

17. Gantt Chart

A visual representation of the scheduling of processes over time.

18. Quantum

The fixed time period allocated to each process in Round Robin scheduling.

19. Dispatch Latency

The time it takes for the dispatcher to stop one process and start another.

20. Soft Real-Time Systems

Systems in which critical real-time tasks receive priority over other tasks, but there is no guarantee that they complete by their deadlines.

21. Hard Real-Time Systems

Systems that require tasks to be completed by strict deadlines.

22. Multilevel Queue Scheduling

A scheduling scheme that manages multiple queues, each with its own scheduling algorithm.

23. Multilevel Feedback Queue

A scheduling scheme that allows processes to move between multiple queues based on their behavior.
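
A sketch of the feedback rule, under the common (assumed) policy that a process which uses its whole quantum is demoted one level, while a process that blocks for I/O before its slice expires stays at its level.

```python
from collections import deque

# Multilevel feedback sketch: CPU-bound processes drift to lower-priority
# queues; interactive processes stay near the top. Three levels and the
# demotion policy below are assumptions for illustration.
queues = [deque(), deque(), deque()]   # level 0 = highest priority

def on_quantum_expired(level, proc):
    """CPU-bound behavior: demote to the next lower-priority queue."""
    queues[min(level + 1, len(queues) - 1)].append(proc)

def on_io_block(level, proc):
    """Interactive behavior: keep the process at its current level."""
    queues[level].append(proc)

on_quantum_expired(0, "P1")            # P1 used its whole slice -> level 1
on_io_block(0, "P2")                   # P2 blocked early -> stays at level 0
print([list(q) for q in queues])       # [['P2'], ['P1'], []]
```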

24. Processor Affinity

The preference for a process to keep running on the same processor, because the data already in that processor's cache makes execution faster.

25. Symmetric Multiprocessing (SMP)

A multiprocessing architecture in which each processor is self-scheduling, selecting processes from a common ready queue or its own private queue.

26. Load Balancing

The process of distributing workloads evenly across multiple processors.

27. Interrupt Latency

The time from the arrival of an interrupt to the start of servicing the interrupt request.

28. Little's Law

A formula relating the average queue length n, the average arrival rate λ, and the average waiting time W in a steady-state queue: n = λ × W.
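
A worked example with made-up numbers: if processes arrive at λ = 7 per second and each spends W = 2 seconds waiting, the queue holds about 14 processes on average.

```python
# Little's Law worked example (hypothetical numbers): n = lambda * W
arrival_rate = 7.0   # lambda, processes arriving per second
wait_time = 2.0      # W, average seconds a process spends in the queue
queue_length = arrival_rate * wait_time
print(queue_length)  # 14.0 processes in the queue on average
```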

29. Deterministic Modeling

An evaluation method that uses predetermined workloads to analyze performance of scheduling algorithms.

30. CPU burst duration

The amount of time a process is engaged in CPU activity before performing I/O.

31. Critical Section Problem

The problem in concurrent programming of ensuring that when two or more processes access shared data, only one process executes its critical section at a time.

32. Timer Interrupt

A signal that interrupts the CPU to allow scheduling of the next process.

33. Convoy Effect

Occurs when short processes wait behind one long process (as in FCFS scheduling), delaying their execution and lowering CPU and device utilization.

34. Exponential Averaging

A method for predicting CPU bursts that weights recent history more heavily.
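
A sketch of the usual exponential-averaging recurrence, τ(n+1) = α·t(n) + (1 − α)·τ(n), with a hypothetical α of 0.5 and a made-up burst history.

```python
# Exponential averaging for predicting the next CPU burst:
#   tau_next = alpha * t_recent + (1 - alpha) * tau_previous
# alpha, the initial guess, and the burst history are hypothetical.
alpha = 0.5
tau = 10.0                      # initial prediction for the first burst
for t in [6, 4, 6, 4, 13, 13]:  # observed CPU bursts, most recent last
    tau = alpha * t + (1 - alpha) * tau
print(tau)                      # 11.0, the prediction for the next burst
```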

35. Starvation

A situation where a process never gets the resources it needs because of indefinite postponement.