This material is protected under the Commonwealth of Australia Copyright Regulation 1969.
Any unauthorised copying or communication of the material may constitute copyright infringement.
References: Based on "Operating System Concepts" by Silberschatz, Galvin, and Gagne.
Topics:
Scheduling concepts
CPU scheduling algorithms
CPU scheduling evaluations
Aim: maximize CPU utilization in a multiprogrammed system by having some process from the ready queue executing whenever the CPU is available.
Process Execution: Alternates between cycles of CPU execution (CPU bursts) and I/O waits (I/O bursts).
Burst Cycles:
CPU-Bound Process: A few long CPU bursts.
I/O-Bound Process: Many short CPU bursts.
When the CPU may be switched (scheduling decision points):
States: Running to waiting (I/O request), running to ready (e.g., timer interrupt/time slice expiry), waiting to ready (I/O completion), or process termination.
Short Term Scheduler (STS): Selects a process from the ready queue (processes in memory that are ready to execute) and allocates the CPU to it.
Must be fast because it is invoked very frequently (at every context switch).
Long Term Scheduler (LTS): Controls the degree of multiprogramming; invoked far less frequently and should admit a good mix of CPU-bound and I/O-bound jobs.
Medium Term Scheduler (MTS): Swaps jobs out of memory (and later back in) to reduce the degree of multiprogramming and relieve contention for the CPU and memory.
Ready Queue Structure: Can be a FIFO queue, a priority queue, or another linked structure; its entries are Process Control Blocks (PCBs).
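As a rough sketch (not any particular OS's layout), the ready queue can be modelled as a FIFO of PCB-like records; all field names below are illustrative assumptions:

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class PCB:
    # Illustrative subset of the fields a real PCB would carry.
    pid: int
    state: str          # e.g. "ready", "running", "waiting"
    priority: int
    program_counter: int = 0

# FIFO ready queue; the short-term scheduler dequeues from the front.
ready_queue = deque([PCB(1, "ready", 3), PCB(2, "ready", 1)])

next_proc = ready_queue.popleft()   # process selected to run next
next_proc.state = "running"
```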
Types:
Non-Preemptive: Once the CPU is allocated, it cannot be taken until the process releases it voluntarily (e.g., Windows 3.x).
Preemptive: CPU can be taken away by the scheduler if a higher priority process arrives (e.g., modern operating systems).
Functions (performed by the dispatcher):
Giving control of the CPU to the process selected by the short-term scheduler
Context switching
Switching to user mode
Metrics:
CPU Utilization: Fraction of time the CPU is doing useful work; higher is better.
Throughput: Processes completed per time unit
Turnaround Time: Total time from submission to completion of a process (lower is better)
Waiting Time: Total time a process spends in the ready queue waiting for the CPU (lower is better).
Response Time: Time from request submission to the first response (important for time-sharing environments).
Importance of minimizing response time variance in interactive systems for predictability and usability.
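To make the metrics concrete, here is a minimal sketch that computes turnaround and waiting time for a non-preemptive FIFO schedule; the three burst lengths are illustrative, not taken from the source:

```python
# Hypothetical workload: (pid, arrival_time, cpu_burst), all arriving at time 0.
processes = [("P1", 0, 24), ("P2", 0, 3), ("P3", 0, 3)]

time = 0
for pid, arrival, burst in processes:      # run in list order (first come, first served)
    start = max(time, arrival)
    finish = start + burst
    turnaround = finish - arrival          # submission -> completion
    waiting = turnaround - burst           # time spent in the ready queue
    print(pid, "turnaround =", turnaround, "waiting =", waiting)
    time = finish
```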
First Come First Served (FCFS): Simple FIFO approach, but a long job at the head of the queue forces all later jobs to wait behind it (the convoy effect), often yielding poor average waiting and turnaround times.
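A quick sketch of the convoy effect on the same illustrative bursts: placing the long job first versus last changes the average waiting time dramatically.

```python
def avg_waiting_fcfs(bursts):
    """Average waiting time when jobs run in the given order (all arrive at t = 0)."""
    time, total_wait = 0, 0
    for burst in bursts:
        total_wait += time        # each job waits for everything queued ahead of it
        time += burst
    return total_wait / len(bursts)

print(avg_waiting_fcfs([24, 3, 3]))   # long job first   -> 17.0
print(avg_waiting_fcfs([3, 3, 24]))   # short jobs first -> 3.0
```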
Shortest Job First (SJF): Provably minimizes average waiting time (and hence turnaround time), but requires knowing or predicting the length of each job's next CPU burst, which is difficult; see the burst-prediction sketch after the SJF sub-points below.
Non-preemptive SJF: Jobs cannot be interrupted once started.
Preemptive SJF (Shortest Remaining Time First - SRTF): New shorter jobs can interrupt longer ones.
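Because the next CPU burst length is unknown, it is typically estimated by exponential averaging of previous bursts, tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n; a minimal sketch with illustrative values:

```python
def predict_bursts(observed, alpha=0.5, tau0=10.0):
    """Exponential averaging: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau, predictions = tau0, []
    for t in observed:
        predictions.append(tau)        # prediction made before the burst runs
        tau = alpha * t + (1 - alpha) * tau
    return predictions

print(predict_bursts([6, 4, 6, 4, 13, 13, 13]))   # illustrative burst history
```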
Priority Scheduling: Allocates CPU based on priority (low number = high priority). Can lead to starvation; aging can mitigate this.
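A rough sketch of aging: periodically raising the priority of processes that have waited a long time so low-priority jobs are not starved; the threshold, boost, and field names are illustrative assumptions.

```python
def age_ready_queue(ready, boost=1, wait_threshold=100):
    """Raise (numerically lower) the priority of processes that have waited too long."""
    for proc in ready:
        if proc["waited"] >= wait_threshold and proc["priority"] > 0:
            proc["priority"] -= boost     # lower number = higher priority

ready = [{"pid": 7, "priority": 9, "waited": 250}]
age_ready_queue(ready)
print(ready)   # the longer a process waits, the closer it creeps to priority 0
```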
Round Robin (RR): Each process gets a small time quantum; good for time-sharing systems, but a quantum that is too small causes excessive context-switch overhead (hurting turnaround time), while one that is too large degenerates to FCFS.
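A minimal round-robin sketch over an illustrative workload, showing how the quantum bounds how long any process holds the CPU before returning to the back of the queue:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return (pid, start, end) CPU slices for jobs that all arrive at time 0."""
    queue = deque(enumerate(bursts))               # (pid, remaining burst time)
    time, schedule = 0, []
    while queue:
        pid, remaining = queue.popleft()
        run = min(quantum, remaining)
        schedule.append((pid, time, time + run))
        time += run
        if remaining > run:
            queue.append((pid, remaining - run))   # unfinished: back of the queue
    return schedule

print(round_robin([24, 3, 3], quantum=4))
```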
Multilevel Queue Scheduling: Different scheduling algorithms for different queues (e.g., foreground vs. background).
Multilevel Feedback Queue Scheduling: Allows processes to move between queues to accommodate changing needs, countering starvation.
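A very rough sketch of the feedback idea: a process that uses its whole quantum is demoted to a lower-priority queue (with a longer quantum), while one that blocks early stays at its current level; the number of queues and the quanta are illustrative.

```python
queues = [[], [], []]      # index 0 = highest priority
quanta = [8, 16, 32]       # lower-priority queues get longer quanta

def on_quantum_expired(proc, level):
    """Process used its whole quantum: demote it one level (if not already lowest)."""
    new_level = min(level + 1, len(queues) - 1)
    queues[new_level].append(proc)
    return new_level

def on_early_block(proc, level):
    """Process gave up the CPU early (I/O): keep it at the same, interactive-friendly level."""
    queues[level].append(proc)
    return level

print(on_quantum_expired({"pid": 1}, level=0))   # -> 1 (demoted)
```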
Guaranteed Scheduling: Makes explicit promises about CPU share (e.g., each of n runnable users or processes gets about 1/n of the CPU) and schedules so as to keep them, ensuring fair use.
Real-time Scheduling: Ensures critical tasks are completed on time in systems requiring hard real-time guarantees.
Deterministic Modelling: Evaluates each algorithm on a predetermined workload, giving exact numbers for that workload, but the results may not generalize.
Queueing Models: Use analytical results such as Little's Law to compare algorithms, but depend on assumed (and often simplified) arrival and service-time distributions.
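Little's Law relates the average number of processes in the queue n, the average arrival rate lambda, and the average waiting time W; the numbers below are an illustrative plug-in, not taken from the source:

```latex
n = \lambda \, W
\quad\Rightarrow\quad
W = \frac{n}{\lambda}
  = \frac{14\ \text{processes}}{7\ \text{processes/second}}
  = 2\ \text{seconds average waiting time}
```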
Simulation: Models real systems; however, it can be resource-intensive.
Implementation: Coding the algorithm into a real operating system and measuring it under real workloads requires the most effort but gives the most accurate insights.
Preemptive and fair scheduling has evolved across Linux 2.2 and subsequent kernels, e.g., the O(1) scheduler (introduced in 2.6) and the Completely Fair Scheduler (CFS).
CFS allocates CPU time in proportion to each task's priority (nice value/weight) and the time it has already run, maintaining fairness while remaining efficient.
Introduced in Linux 2.6.23, CFS tracks a per-task virtual runtime (vruntime) and always runs the task that has accumulated the least virtual time.
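A conceptual sketch of the virtual-runtime idea, not the actual kernel code: each task accumulates vruntime scaled by a weight derived from its priority, and the scheduler always picks the task with the smallest vruntime; the weights and time slice here are illustrative assumptions.

```python
# Two illustrative tasks: the higher-weight (higher-priority) one accrues vruntime more slowly.
tasks = [
    {"pid": 1, "weight": 2.0, "vruntime": 0.0},   # higher priority
    {"pid": 2, "weight": 1.0, "vruntime": 0.0},   # default priority
]

def pick_and_run(tasks, slice_ms=10):
    task = min(tasks, key=lambda t: t["vruntime"])   # run the task that is furthest behind
    task["vruntime"] += slice_ms / task["weight"]    # heavier weight => slower vruntime growth
    return task["pid"]

print([pick_and_run(tasks) for _ in range(6)])   # the pid-1 task receives roughly 2/3 of the slices
```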
Understanding CPU scheduling intricacies is vital for designing efficient operating systems that provide fair and fast service to users and processes, balancing complexity and performance effectively.