Job (Scheduling)
A schedulable entity (thread) with pattern (CPU bursts and wait periods)
Workload
Set of jobs submitted over time
RQDS (Ready Queue Data Structure)
Data structure used by scheduler to manage runnable jobs
Preemptive scheduling
Scheduler can interrupt a running job mid-execution
Non-preemptive scheduling
Job runs until it blocks or finishes
Time quantum
Fixed CPU time slice given in preemptive scheduling
Average Turnaround Time (ATT)
(Completion time - Arrival time) averaged over all jobs
Average Response Time (ART)
(Time of first execution - Arrival time) averaged over all jobs
ATT vs ART tradeoff
Optimising one worsens the other; cannot optimise both simultaneously
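Both metrics can be computed directly from per-job timestamps. A minimal sketch; the job tuples and values below are made up for illustration:

```python
# Each job is (arrival, first_run, completion); times in arbitrary units.
def avg_turnaround(jobs):
    # ATT: (completion - arrival) averaged over all jobs.
    return sum(c - a for a, _, c in jobs) / len(jobs)

def avg_response(jobs):
    # ART: (first execution - arrival) averaged over all jobs.
    return sum(f - a for a, f, _ in jobs) / len(jobs)

# Three jobs arrive at t=0 and run back-to-back, 10 units each (FCFS-style).
jobs = [(0, 0, 10), (0, 10, 20), (0, 20, 30)]
print(avg_turnaround(jobs))  # 20.0
print(avg_response(jobs))    # 10.0
```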
System-critical jobs
Highest priority (e.g. interrupts, kernel threads)
Real-time jobs
Require strict timing guarantees (audio, video, games)
Interactive jobs
User-facing tasks (shells, GUIs)
I/O-bound jobs
Spend more time waiting than computing
CPU-bound jobs
Long computation tasks (rendering, simulations)
Background jobs
Lowest priority (logging, prefetching)
FCFS (First Come First Serve)
Schedules jobs in arrival order using FIFO queue
FCFS convoy effect
Short jobs wait behind long job (convoy leader)
FCFS performance
Poor ATT and ART with mixed workloads
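The convoy effect falls straight out of a back-to-back simulation. A sketch, assuming all jobs arrive at t = 0 and burst lengths are illustrative:

```python
def fcfs(bursts):
    # bursts: CPU burst lengths in arrival order; all jobs arrive at t=0.
    t, completions = 0, []
    for b in bursts:
        t += b
        completions.append(t)
    return sum(completions) / len(completions)  # ATT (arrivals are all 0)

print(fcfs([100, 10, 10]))  # convoy leader first: ATT = (100+110+120)/3 = 110.0
print(fcfs([10, 10, 100]))  # same jobs, long job last: ATT = (10+20+120)/3 = 50.0
```

The same three jobs produce a very different ATT depending purely on arrival order, which is exactly the convoy problem.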
SJF (Shortest Job First)
Schedules job with smallest CPU burst first
SJF optimality
Optimal ATT when all jobs arrive together
SJF limitation
Requires knowing burst times in advance
SJF convoy effect
Still occurs if long job arrives first
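With simultaneous arrivals, SJF is just "sort by burst length, then run in order". A sketch under that assumption (bursts are illustrative):

```python
def sjf_att(bursts):
    # All jobs arrive at t=0; run the shortest burst first.
    t, total = 0, 0
    for b in sorted(bursts):
        t += b
        total += t  # completion time == turnaround when arrival is 0
    return total / len(bursts)

# Regardless of submission order, SJF reorders to shortest-first:
print(sjf_att([100, 10, 10]))  # (10+20+120)/3 = 50.0
```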
STCF (Shortest Time to Completion First)
Preemptive SJF; always run job with least remaining time
STCF optimality
Optimal ATT even with varying arrivals
STCF drawback
Poor ART and possible starvation of long jobs
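STCF can be simulated with a small event loop over a min-heap keyed on remaining time. A sketch; the workload is made up for illustration:

```python
import heapq

def stcf_att(jobs):
    # jobs: list of (arrival, burst); always run the least remaining time.
    jobs = sorted(jobs)
    t, i, done, total = 0, 0, 0, 0
    ready = []  # min-heap of (remaining, arrival)
    while done < len(jobs):
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(ready, (jobs[i][1], jobs[i][0]))
            i += 1
        if not ready:
            t = jobs[i][0]  # idle until the next arrival
            continue
        rem, arr = heapq.heappop(ready)
        next_arr = jobs[i][0] if i < len(jobs) else float("inf")
        run = min(rem, next_arr - t)  # run until done or a new arrival preempts
        t += run
        if run == rem:
            done += 1
            total += t - arr  # this job's turnaround
        else:
            heapq.heappush(ready, (rem - run, arr))
    return total / len(jobs)

# Long job arrives first, two short jobs at t=10: the long job is preempted.
print(stcf_att([(0, 100), (10, 10), (10, 10)]))  # 50.0
```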
Round Robin (RR)
Each job gets fixed quantum and cycles in circular queue
RR advantage
Good response time (ART)
RR disadvantage
Poor turnaround time (ATT)
RR quantum tradeoff
Small q = better ART but high overhead; large q ≈ FCFS
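The quantum tradeoff shows up in a small RR simulation (ignoring context-switch overhead). A sketch, assuming all jobs arrive at t = 0:

```python
from collections import deque

def rr_times(bursts, q):
    # Returns (ATT, ART) for quantum q; all jobs arrive at t=0.
    t = 0
    first, comp = {}, {}
    queue = deque((j, b) for j, b in enumerate(bursts))
    while queue:
        j, rem = queue.popleft()
        first.setdefault(j, t)       # record time of first execution
        run = min(q, rem)
        t += run
        if rem > run:
            queue.append((j, rem - run))  # rotate to the back of the queue
        else:
            comp[j] = t
    n = len(bursts)
    return sum(comp.values()) / n, sum(first.values()) / n

print(rr_times([10, 10, 10], q=1))   # (29.0, 1.0): good ART, poor ATT
print(rr_times([10, 10, 10], q=10))  # (20.0, 10.0): degenerates to FCFS
```

In reality a tiny quantum also pays context-switch overhead on every rotation, which this sketch ignores.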
MLFQ rule 1
Higher priority job runs over lower priority job
MLFQ rule 2
Same priority jobs scheduled with Round Robin
MLFQ rule 3
New jobs enter highest priority queue
MLFQ rule 4
Use full quantum → demotion; finish early → stay
MLFQ rule 5
Periodic boost moves all jobs to top queue
MLFQ purpose
Approximate SJF without knowing burst times
MLFQ starvation prevention
Priority boost prevents permanent starvation
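Rules 3-5 amount to simple queue-level bookkeeping. A toy sketch; `NUM_LEVELS` and the function names are invented for illustration, with 0 as the highest-priority queue:

```python
NUM_LEVELS = 3  # illustrative; real MLFQs vary

def on_arrival():
    return 0  # rule 3: new jobs enter the highest-priority queue

def on_quantum_expired(level):
    # rule 4: a job that used its full quantum is demoted one level.
    return min(level + 1, NUM_LEVELS - 1)

def on_boost(levels):
    # rule 5: periodic boost moves every job back to the top queue.
    return [0] * len(levels)

lvl = on_arrival()             # 0: starts at the top
lvl = on_quantum_expired(lvl)  # 1: CPU-bound job burned its quantum
lvl = on_quantum_expired(lvl)  # 2: demoted again
print(lvl)                     # 2
print(on_boost([2, 1, 2]))     # [0, 0, 0]
```

A job that blocks before its quantum expires never calls `on_quantum_expired`, which is how interactive jobs stay near the top.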
CFS (Completely Fair Scheduler)
Linux scheduler giving proportional CPU time
CFS data structure
Red-black tree sorted by vruntime
CFS selection rule
Run job with lowest vruntime
CFS complexity
O(log n) per operation
Vruntime
Virtual runtime tracking weighted CPU usage
CFS vruntime update
vruntime += actual_time × (default_weight / job_weight)
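The update rule as a sketch, using the default weight of 1024 from the cards (the example weights are illustrative):

```python
DEFAULT_WEIGHT = 1024  # CFS weight at nice = 0

def vruntime_delta(actual_ms, job_weight):
    # vruntime advances more slowly for heavier (higher-priority) jobs,
    # so they are picked again sooner.
    return actual_ms * (DEFAULT_WEIGHT / job_weight)

print(vruntime_delta(10, 1024))  # 10.0: at nice 0, vruntime tracks real time
print(vruntime_delta(10, 2048))  # 5.0:  double weight, vruntime grows half as fast
```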
Nice values
Range from -20 (high priority) to +19 (low priority)
Default CFS weight
1024 (nice = 0)
CFS time slice
sched_latency × (job weight / total weight)
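Plugging in the 48 ms latency from the cards (weights below are illustrative):

```python
SCHED_LATENCY_MS = 48  # typical CFS scheduling latency

def time_slice(job_weight, total_weight):
    # Each job's slice is its proportional share of the latency period.
    return SCHED_LATENCY_MS * job_weight / total_weight

print(time_slice(1024, 2048))  # 24.0: two equal-weight jobs split the period
print(time_slice(1024, 1024))  # 48.0: a lone job gets the whole period
```

With many jobs the computed slice can shrink below the minimum granularity (~6 ms), at which point CFS clamps it rather than slicing finer.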
Sched latency (typical)
48 ms
Minimum granularity
~6 ms
CFS fairness
Jobs that run less stay near front → no starvation
CFS waking jobs
Set vruntime to max(previous, current minimum)
Stride scheduling
Deterministic proportional share scheduler
Stride formula
stride = NUM_MAX / tickets
Pass value
Update pass += stride after each run
Stride selection
Run job with lowest pass value
Stride advantage
Precise proportional fairness
Stride disadvantage
New jobs can monopolise CPU
Stride issue
Changing priorities requires recalculating pass values
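A minimal stride scheduler sketch; `NUM_MAX = 10_000` is an arbitrary large constant and the ticket values are illustrative:

```python
NUM_MAX = 10_000  # arbitrary large constant

def stride_schedule(tickets, slices):
    # tickets: {job: ticket count}; run `slices` time slices, return run counts.
    stride = {j: NUM_MAX // t for j, t in tickets.items()}
    passes = {j: 0 for j in tickets}
    counts = {j: 0 for j in tickets}
    for _ in range(slices):
        j = min(passes, key=passes.get)  # run the job with the lowest pass
        counts[j] += 1
        passes[j] += stride[j]           # pass += stride after each run
    return counts

# Over 8 slices the run counts track the 100:50:250 ticket ratio exactly:
print(stride_schedule({"A": 100, "B": 50, "C": 250}, 8))  # {'A': 2, 'B': 1, 'C': 5}
```

The deterministic weakness is also visible here: a job added mid-run with `passes[j] = 0` would be picked every slice until its pass catches up.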
SQMS (Single Queue Multiprocessor Scheduling)
One shared queue for all CPUs
SQMS problem 1
Lock contention between CPUs
SQMS problem 2
Poor cache affinity (jobs move between CPUs)
MQMS (Multi-Queue Multiprocessor Scheduling)
Each CPU has its own queue
MQMS advantage
Better cache locality, no locking
MQMS problem
Load imbalance across CPUs
Linux CFS multiprocessor
Separate red-black tree per CPU + periodic load balancing
Cache affinity
Keeping jobs on same CPU improves performance
64 KB
Linux pipe buffer size
48 ms
Typical CFS scheduling latency
6 ms
CFS minimum granularity
1024
CFS default weight
-20 to +19
Nice value range
8 MB
Default Linux stack size
MLFQ vs CFS (core difference)
MLFQ infers behaviour; CFS uses explicit fairness via vruntime
MLFQ structure
Multiple priority queues
CFS structure
Single red-black tree
FCFS schedule order
Jobs execute strictly in arrival order
Convoy leader (FCFS)
Long job at front causing delays
RR scheduling behaviour
Job runs for q or until completion, then rotates
RR fairness
All jobs get CPU regularly
Priority-based RR (duplicate pointers)
More entries = higher effective priority
Duplicate pointer issue
Increased overhead and complexity
Weighted RR (w × q)
Jobs get multiple consecutive quanta
Weighted RR downside
Poor interactivity (long uninterrupted runs)
MLFQ priority for CPU-bound jobs
Lower priority (demoted frequently)
MLFQ priority for interactive jobs
Higher priority (short bursts avoid demotion)
Effect of adding CPUs
Does not change MLFQ priority behaviour, only parallelism
Boost period (T small)
Frequent boosts → more fairness
Boost period (T large)
Less fairness, more starvation risk
More levels (L larger)
Finer priority control
Lottery scheduling
Probabilistic scheduling using tickets
Lottery expected runs
n × p
Lottery variance
np(1-p)
Relative deviation
Decreases as number of lotteries increases
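The three lottery statistics above in one helper, where p is a job's share of the tickets and n the number of lotteries held (the values below are illustrative):

```python
import math

def lottery_stats(n, p):
    # Expected runs, variance, and relative deviation (sigma / mean).
    mean = n * p
    var = n * p * (1 - p)
    return mean, var, math.sqrt(var) / mean

print(lottery_stats(100, 0.25))    # (25.0, 18.75, ~0.173)
print(lottery_stats(10000, 0.25))  # (2500.0, 1875.0, ~0.0173)
```

The relative deviation shrinks as 1/√n, so fairness is only probabilistic and improves with more lotteries; stride scheduling removes this randomness entirely.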
CFS selection complexity
O(log n) using red-black tree
CFS update operations
Tree insert/remove also O(log n)
Multiprocessor scheduling goal
Balance load while preserving cache locality
Load balancing
Periodic migration of tasks between CPUs