These flashcards cover key terms and concepts from Chapter 6 on CPU Scheduling in Operating Systems.
CPU Scheduling
The activity of deciding which process in the ready queue will next be allocated CPU time.
Throughput
The number of processes that complete their execution per time unit.
Turnaround Time
The total time from the submission of a process to its completion.
Waiting Time
The total time a process spends waiting in the ready queue.
Response Time
The time from when a request was submitted until the first response is produced.
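A hypothetical worked example of these metrics (not from the original deck): a process that arrives at time 0, first gets the CPU at time 2, and completes at time 10 with a total CPU burst of 6 has a response time of 2 (first run minus arrival), a turnaround time of 10 (completion minus arrival), and a waiting time of 10 - 6 = 4 (turnaround minus burst).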
Preemptive Scheduling
A scheduling strategy whereby a currently running process can be interrupted and moved to the ready state.
Non-preemptive Scheduling
A scheduling strategy where a running process cannot be interrupted until it finishes its current CPU burst.
First-Come, First-Served (FCFS)
A scheduling algorithm that serves processes in the order they arrive in the ready queue.
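A minimal FCFS sketch in Python, assuming processes are given as (name, arrival, burst) tuples; the process names and burst values below are illustrative, not from the chapter:

    # Illustrative FCFS simulation: processes are served strictly in arrival order.
    def fcfs(processes):
        """processes: list of (name, arrival, burst), sorted by arrival time."""
        time, results = 0, []
        for name, arrival, burst in processes:
            start = max(time, arrival)        # CPU may sit idle until the process arrives
            waiting = start - arrival         # time spent in the ready queue
            time = start + burst              # completion time of this process
            results.append((name, waiting, time - arrival))  # (name, waiting, turnaround)
        return results

    print(fcfs([("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]))
    # [('P1', 0, 24), ('P2', 24, 27), ('P3', 27, 30)]  -> average waiting time 17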
Shortest Job First (SJF)
A scheduling algorithm that selects processes based on the length of their next CPU burst.
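A rough non-preemptive SJF sketch under the same assumptions (illustrative only): at every scheduling decision, the ready process with the shortest next CPU burst is chosen.

    # Illustrative non-preemptive SJF: repeatedly pick the shortest ready burst.
    def sjf(processes):
        """processes: list of (name, arrival, burst)."""
        pending = sorted(processes, key=lambda p: p[1])   # order by arrival time
        time, order = 0, []
        while pending:
            ready = [p for p in pending if p[1] <= time] or [pending[0]]
            nxt = min(ready, key=lambda p: p[2])          # shortest next CPU burst wins
            pending.remove(nxt)
            time = max(time, nxt[1]) + nxt[2]
            order.append(nxt[0])
        return order

    print(sjf([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1), ("P4", 5, 4)]))
    # ['P1', 'P3', 'P2', 'P4']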
Priority Scheduling
A scheduling method in which each process is assigned a priority and the CPU is allocated to the highest-priority process.
Round Robin (RR)
A preemptive scheduling algorithm that gives each process in the ready queue a small, fixed unit of CPU time (the time quantum) in turn.
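A hedged Round Robin sketch, assuming all processes are ready at time 0 and a quantum of 4 ms (values chosen for illustration):

    from collections import deque

    # Illustrative Round Robin: each process gets at most `quantum` ms per turn.
    def round_robin(bursts, quantum):
        """bursts: dict mapping process name -> remaining burst time."""
        queue, timeline = deque(bursts.items()), []
        while queue:
            name, remaining = queue.popleft()
            slice_ = min(quantum, remaining)
            timeline.append((name, slice_))
            if remaining > slice_:            # unfinished: preempt and requeue at the tail
                queue.append((name, remaining - slice_))
        return timeline

    print(round_robin({"P1": 24, "P2": 3, "P3": 3}, quantum=4))
    # [('P1', 4), ('P2', 3), ('P3', 3), ('P1', 4), ('P1', 4), ('P1', 4), ('P1', 4), ('P1', 4)]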
Multilevel Queue Scheduling
A scheduling algorithm that partitions the ready queue into separate queues, each with its own scheduling algorithm.
Multilevel Feedback Queue
A scheduling algorithm that allows processes to move between different queues based on their execution history.
Thread Scheduling
A method of scheduling threads in an operating system, with distinctions between user-level and kernel-level threads.
Dispatcher
A module that gives control of the CPU to the process selected by the short-term scheduler.
CPU-I/O Burst Cycle
The pattern of alternating periods of CPU execution and I/O wait in process execution.
Context Switch
The procedure of saving the state of the currently running process and restoring the state of the next one so that multiple processes can share a single CPU.
Dispatch Latency
The time it takes for the dispatcher to stop one process and start another running.
Aging
A technique used in scheduling to prevent starvation by gradually increasing the priority of waiting processes.
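One possible aging rule, sketched here as an assumption rather than the chapter's exact algorithm: on every scheduling tick, each waiting process's priority is boosted slightly, so a long-waiting process eventually outranks newer arrivals.

    # Illustrative aging: waiting processes gain priority each tick (higher value runs first).
    def age_and_pick(ready_queue, boost=1):
        """ready_queue: list of dicts with 'name' and 'priority' keys."""
        for proc in ready_queue:
            proc["priority"] += boost         # steady boost prevents indefinite starvation
        return max(ready_queue, key=lambda p: p["priority"])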
Load Balancing
Distributing processes across multiple CPUs to ensure efficient CPU usage.