Operating Systems Lecture 7 - Scheduling and Context Switch
Concepts and Strategies for CPU Scheduling
Overview of CPU scheduling concepts, strategies, and time allocation algorithms across various systems including VAX/VMS, NT, and Unix.
Implementation aspects of context switching.
Resource Allocation in Operating Systems
Processes require various resources, including CPU time, which can be allocated and scheduled.
Allocation: Assigning resources to processes.
Scheduling: Timing the use of resources.
Dispatching: Switching between processes at the appropriate times.
Importance of discussing CPU scheduling separately due to the CPU's critical role as a resource.
The Problem of CPU Scheduling
Problem arises when there are more processes than available processors.
Processes must share limited CPU time.
Key states of processes include:
Running: When a process is actively using the CPU.
Blocked: When a process is unable to continue (e.g., waiting for I/O).
Upon expiry of a time slot or entering a blocked state, the scheduler selects one of the ready processes to run, followed by a CPU switch.
Levels of Scheduling
Long-term Scheduling (Job Scheduling): Determines which jobs from a pool will run and affects the level of multiprogramming.
Medium-term Scheduling: Involves the suspension of processes that are not ready to run (e.g., potentially swapped out due to memory constraints).
Short-term Scheduling: Determines which of the ready-to-run processes will be assigned CPU time.
Expectations from CPU Scheduling
Impartiality: All tasks should receive CPU time fairly, though not necessarily equally.
Efficiency: Maximal usage of CPU time.
Response Time: Acceptable response time for interactive users.
Turnaround Time: Should be acceptable for batch processing jobs.
Performance Metrics: Throughput (jobs processed per time unit) should be optimized, not just CPU utilization.
The Programmable Clock
Comprises three components:
Oscillator Crystal: Generates periodic signals at high frequencies.
Counter: Decrements its value with generated signals and triggers interrupts when reaching zero.
Holding Register: Its value is reloaded into the counter each time the counter reaches zero, controlling how frequently interrupts are generated.
Example: A 1 MHz crystal generates a signal every 1 µs; a 16-bit holding register therefore allows interrupt periods from 1 µs up to about 65.5 ms.
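The range of programmable interrupt periods follows directly from the crystal frequency and the counter width. A minimal sketch (constants taken from the example above):

```python
CRYSTAL_HZ = 1_000_000        # 1 MHz oscillator: one signal per microsecond
COUNTER_BITS = 16             # width of the down-counter / holding register

def interrupt_period_us(holding_register: int) -> int:
    """Microseconds between interrupts for a given holding-register value."""
    assert 1 <= holding_register < 2 ** COUNTER_BITS
    return holding_register * 1_000_000 // CRYSTAL_HZ

# Smallest and largest programmable periods:
print(interrupt_period_us(1))            # 1 us
print(interrupt_period_us(2**16 - 1))    # 65535 us, about 65.5 ms
```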
Tasks of the Clock Interrupt Handler
Maintaining the time of day, tracking the CPU time used by processes, and accounting for time quanta so the scheduler is invoked when a quantum expires.
Generates alarm signals upon request from processes and provides watchdog timers for I/O management and profiling.
CPU Time Registers and Schedulers
A 64-bit counter registers time in clock ticks since boot; together with the recorded boot time, it yields the current time of day.
A process's CPU time register increments per tick if the process is running, facilitating CPU usage calculations for user and kernel modes.
A time-slice counter decrements on each clock tick; when it reaches zero, the process's quantum has expired.
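The per-tick bookkeeping described above can be sketched as follows. This is an illustrative model, not any particular kernel's code; the `Process` class and field names are assumptions:

```python
class Process:
    def __init__(self, name, quantum):
        self.name = name
        self.cpu_ticks = 0            # total CPU time used, in ticks
        self.quantum_left = quantum   # time-slice counter

ticks_since_boot = 0  # stands in for the 64-bit tick counter

def clock_tick(running: Process) -> bool:
    """Handle one clock interrupt; return True if the quantum expired."""
    global ticks_since_boot
    ticks_since_boot += 1         # advance system time
    running.cpu_ticks += 1        # charge the tick to the running process
    running.quantum_left -= 1     # consume the time slice
    return running.quantum_left == 0

p = Process("p1", quantum=3)
expired = [clock_tick(p) for _ in range(3)]
print(expired)    # [False, False, True]
```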
Timer Management
Timer maintains a list of counters to signal expirations, allowing for task scheduling and managing process wait times.
Example list: Current Time: 4200, Next signals: S1 at 4203, S2 at 4207, S3 at 4213.
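The timer list above can be modeled as a priority queue of expiry times; on each advance of the clock, every timer at or before the current time fires. Real kernels often store deltas between entries instead, but the behavior is the same. A sketch using the example values:

```python
import heapq

current_time = 4200
# (expiry_time, signal_name) pairs kept in a min-heap, soonest first
timers = [(4203, "S1"), (4207, "S2"), (4213, "S3")]
heapq.heapify(timers)

def advance_to(t):
    """Advance the clock to time t and deliver every signal that has expired."""
    global current_time
    current_time = t
    fired = []
    while timers and timers[0][0] <= current_time:
        fired.append(heapq.heappop(timers)[1])
    return fired

first = advance_to(4207)
second = advance_to(4213)
print(first, second)   # ['S1', 'S2'] ['S3']
```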
Technical Basics of Scheduling
The clock device generates periodic interrupts allowing for time slice evaluations and CPU switching.
Decision Strategies in Scheduling
Non-preemptive: A running process keeps the CPU until it blocks or terminates voluntarily.
Preemptive: Scheduler can interrupt a running process to assign CPU to another.
Scheduling Decision Situations
Transition to a wait state (process waiting).
Preemption due to interrupting events.
State transition due to signals.
Process termination.
Factors Influencing Process Selection
Priority: Influenced by memory requirements, arrival times, CPU usage times, and whether the process is CPU or I/O intensive.
Estimation of expected CPU usage and real-time pressure on processes also contributes to scheduling decisions.
Concepts Related to Process Life and State Transitions
Processes can experience:
CPU burst periods (intensive computation sections).
I/O burst periods (waiting for I/O).
CPU Bound Processes: Have long computational bursts.
I/O Bound Processes: Short computational bursts with many I/O operations.
Scheduling Algorithms Overview
First Come, First Served (FCFS): Processes receive CPU in the order they arrive.
Simple but can cause convoy effects.
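The convoy effect is easy to quantify: with all jobs arriving together, each job waits for the sum of the bursts ahead of it, so one long CPU-bound job at the head of the queue inflates everyone's waiting time. A small illustration (burst values are made up):

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each job when served in arrival order (all arrive at t=0)."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)   # this job waits for everything before it
        elapsed += b
    return waits

# One long job ahead of short jobs drags the average wait up:
print(fcfs_waiting_times([100, 2, 2]))   # [0, 100, 102], average about 67
print(fcfs_waiting_times([2, 2, 100]))   # [0, 2, 4], average 2
```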
Shortest Job First (SJF): Prioritizes processes based on the shortest required execution time.
Aging algorithm may be used to avoid starvation of longer jobs.
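One way to sketch SJF together with aging: order jobs by burst length, but let each job's effective length shrink the longer it has waited, so long jobs eventually move to the front. The specific aging rule below (subtracting waited time from the burst) is only one illustrative choice:

```python
def sjf_order(jobs):
    """jobs: {name: burst length}; returns names in shortest-job-first order."""
    return sorted(jobs, key=jobs.get)

def with_aging(jobs, waited, boost=1):
    """Effective burst shrinks the longer a job has waited (a simple aging rule)."""
    return sorted(jobs, key=lambda n: jobs[n] - boost * waited[n])

jobs = {"long": 10, "short": 2}
print(sjf_order(jobs))                            # ['short', 'long']
# After waiting 9 units, the long job outranks the short one:
print(with_aging(jobs, {"long": 9, "short": 0}))  # ['long', 'short']
```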
Policy-Driven Scheduling: Makes each process a concrete promise of CPU share (e.g., 1/n of the CPU for n processes) and allocates CPU time so that the promises remain realistic.
Round Robin (RR): Assigns fixed time slices to processes in a circular queue.
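A minimal Round Robin simulation: each process runs for at most one quantum, then goes to the tail of the circular queue if it still needs CPU. Names and burst values are illustrative:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: {name: ticks needed}; returns (name, ticks_run) slices in order."""
    queue = deque(bursts.items())
    trace = []
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)          # run one quantum or until completion
        trace.append((name, run))
        if left > run:
            queue.append((name, left - run))  # back to the tail of the queue
    return trace

print(round_robin({"A": 5, "B": 3}, quantum=2))
# [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```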
Multilevel Feedback Queue Scheduling: Incorporates multiple queues for different priorities with feedback mechanisms.
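The feedback part of multilevel feedback queues can be sketched with one common rule: a process that uses its entire quantum (CPU-bound behavior) is demoted to a lower-priority queue. This is only one of several possible feedback rules, and the data layout is an assumption:

```python
from collections import deque

def requeue(levels, proc, used_full_quantum):
    """Feedback rule: a process that burns its whole quantum sinks one level."""
    if used_full_quantum and proc["level"] < len(levels) - 1:
        proc["level"] += 1
    levels[proc["level"]].append(proc)

levels = [deque(), deque(), deque()]   # level 0 = highest priority
p = {"name": "A", "level": 0}
requeue(levels, p, used_full_quantum=True)
print(p["level"])   # 1: demoted after exhausting its quantum
```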
Specific Scheduling Implementations
VAX/VMS Scheduling: Features include multiple priority levels and dynamic adjustments based on process states.
Unix Scheduling: Dynamic user-mode and kernel-mode priorities, aging mechanisms, and context management.
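The dynamic user-mode priority mentioned above can be sketched in the style of classic Unix: once a second, accumulated CPU usage is decayed and priorities recomputed, so heavy CPU users drift toward worse (numerically higher) priority while I/O-bound processes recover. The constants and field names below are illustrative, not those of any specific kernel:

```python
def decay_and_recompute(procs, base=60):
    """Periodic recalculation: decay CPU usage, then derive priority
    (lower number = better priority, as in classic Unix)."""
    for p in procs:
        p["cpu"] //= 2                      # exponential decay of recent usage
        p["prio"] = base + p["cpu"] // 2    # heavy CPU users get worse priority

procs = [{"name": "hog", "cpu": 80}, {"name": "io", "cpu": 4}]
decay_and_recompute(procs)
print([(p["name"], p["prio"]) for p in procs])  # [('hog', 80), ('io', 61)]
```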
NT Scheduling: Supports multiple priority levels and a standby state for the thread selected to run next.
Linux Scheduling: Follows POSIX standards with multiple scheduling classes such as SCHED_FIFO, SCHED_RR, and SCHED_OTHER.
Context Switch Implementation
Context Save: The hardware context of the running process is saved.
Program Counter (PC) and Dynamic Context: Important elements that need to be preserved across switches.
Dynamic Context Storage Options
Context can be saved in various ways, such as in a Process Control Block (PCB), stack, or kernel stack, with hardware support in some systems.
Mechanism for Context Management in Unix
Dynamic context is structured using system layers, wherein context is saved or restored on event triggers (e.g., system calls, interrupts).
Pseudocode for Context Switching
savecontext() and resumecontext() functions: algorithms that save and restore the process state during a context switch.
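The save/resume idea can be made concrete with Python generators, which are not how a kernel is implemented but behave analogously: each generator carries its own dynamic context (locals plus a resume point) that is saved at `yield` and restored by `next()`. A toy dispatcher over two "processes":

```python
def process(name, steps):
    """Each generator keeps its own 'dynamic context' (locals + resume point);
    yield plays the role of saving context, next() the role of resuming it."""
    for i in range(steps):
        yield f"{name}:{i}"

# A trivial round-robin dispatcher:
ready = [process("A", 2), process("B", 2)]
trace = []
while ready:
    p = ready.pop(0)
    try:
        trace.append(next(p))   # restore p's context and run one step
        ready.append(p)         # context is saved again at the yield
    except StopIteration:
        pass                    # process terminated; drop it from the queue
print(trace)   # ['A:0', 'B:0', 'A:1', 'B:1']
```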
Summary of Lecture Insights
Covered fundamental aspects, strategies, and algorithms for CPU scheduling, including practical implementations in various operating systems.
Related Final Exam Items
Scheduled items for review include:
Time allocation and scheduling strategies.
Scheduling decision situations.
Factors affecting priority in scheduling.
Life stages of processes and algorithms such as FCFS, SJF, Policy Driven Scheduling, RR, and Multilevel Feedback Queue Scheduling.
Understanding of aging algorithms and examples within VAX, Unix, Linux, and NT contexts.
Conclusion
The lecture has explored essential concepts and implementations regarding CPU scheduling and context switching in detail.