Scheduling

Last updated 7:44 PM on 4/29/26

95 Terms

1
New cards

Job (Scheduling)

A schedulable entity (e.g. a thread) with a characteristic pattern of CPU bursts and wait periods

2
New cards

Workload

Set of jobs submitted over time

3
New cards

RQDS (Ready Queue Data Structure)

Data structure used by scheduler to manage runnable jobs

4
New cards

Preemptive scheduling

Scheduler can interrupt a running job mid-execution

5
New cards

Non-preemptive scheduling

Job runs until it blocks or finishes

6
New cards

Time quantum

Fixed CPU time slice given in preemptive scheduling

7
New cards

Average Turnaround Time (ATT)

(Completion time - Arrival time) averaged over all jobs

8
New cards

Average Response Time (ART)

(Time of first execution - Arrival time) averaged over all jobs

9
New cards

ATT vs ART tradeoff

Optimising one worsens the other; cannot optimise both simultaneously
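The two metrics above can be computed from a schedule. A minimal sketch, assuming a non-preemptive FCFS schedule and jobs given as illustrative (arrival, burst) pairs:

```python
# Hedged sketch: ATT and ART for a non-preemptive FCFS schedule.
# Jobs are (arrival, burst) pairs; times are made-up examples.
def fcfs_metrics(jobs):
    t = 0
    turnaround, response = [], []
    for arrival, burst in sorted(jobs, key=lambda j: j[0]):
        t = max(t, arrival)             # CPU may idle until the job arrives
        response.append(t - arrival)    # first execution - arrival
        t += burst                      # runs to completion (non-preemptive)
        turnaround.append(t - arrival)  # completion - arrival
    n = len(jobs)
    return sum(turnaround) / n, sum(response) / n

# Long job first: the short jobs wait behind it (convoy effect)
att, art = fcfs_metrics([(0, 10), (1, 2), (2, 2)])
```

With the long job at the front, both averages suffer, which is the FCFS convoy effect described in the cards below.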

10
New cards

System-critical jobs

Highest priority (e.g. interrupts, kernel threads)

11
New cards

Real-time jobs

Require strict timing guarantees (audio, video, games)

12
New cards

Interactive jobs

User-facing tasks (shells, GUIs)

13
New cards

I/O-bound jobs

Spend more time waiting than computing

14
New cards

CPU-bound jobs

Long computation tasks (rendering, simulations)

15
New cards

Background jobs

Lowest priority (logging, prefetching)

16
New cards

FCFS (First Come First Serve)

Schedules jobs in arrival order using FIFO queue

17
New cards

FCFS convoy effect

Short jobs wait behind a long job at the front of the queue (the convoy leader)

18
New cards

FCFS performance

Poor ATT and ART with mixed workloads

19
New cards

SJF (Shortest Job First)

Schedules job with smallest CPU burst first

20
New cards

SJF optimality

Optimal ATT when all jobs arrive together

21
New cards

SJF limitation

Requires knowing burst times in advance

22
New cards

SJF convoy effect

Still occurs if long job arrives first

23
New cards

STCF (Shortest Time to Completion First)

Preemptive SJF; always run job with least remaining time

24
New cards

STCF optimality

Optimal ATT even with varying arrivals

25
New cards

STCF drawback

Poor ART and possible starvation of long jobs
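STCF's "always run the job with the least remaining time" rule can be sketched as a unit-tick simulation. Assumptions: jobs are (arrival, burst) pairs, identified by arrival-order index, and ties go to the first job found:

```python
# Sketch of STCF (preemptive SJF): at every tick, run the arrived
# job with the least remaining time. Jobs are (arrival, burst).
def stcf(jobs):
    events = sorted(jobs)
    remaining = {}
    t, i, done = 0, 0, []
    while i < len(events) or remaining:
        while i < len(events) and events[i][0] <= t:
            remaining[i] = events[i][1]   # job i becomes runnable
            i += 1
        if not remaining:
            t = events[i][0]              # jump over idle time
            continue
        j = min(remaining, key=lambda k: remaining[k])  # least remaining wins
        remaining[j] -= 1
        t += 1
        if remaining[j] == 0:
            done.append(j)
            del remaining[j]
    return done  # completion order by job index

stcf([(0, 7), (1, 2), (2, 3)])  # short jobs 1 and 2 preempt and finish first
```

Note how the long job (index 0) finishes last even though it arrived first, which is exactly why STCF has poor response for long jobs and can starve them under a steady stream of short arrivals.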

26
New cards

Round Robin (RR)

Each job gets fixed quantum and cycles in circular queue

27
New cards

RR advantage

Good response time (ART)

28
New cards

RR disadvantage

Poor turnaround time (ATT)

29
New cards

RR quantum tradeoff

Small q = better ART but high overhead; large q ≈ FCFS
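The RR rotation described above can be sketched with a FIFO queue. Assumptions: all jobs arrive at time 0 and are given as (name, burst) pairs:

```python
from collections import deque

# Sketch of Round Robin with quantum q; jobs are (name, burst),
# all arriving at time 0. Returns the completion order.
def round_robin(jobs, q):
    queue = deque(jobs)
    order = []
    while queue:
        name, remaining = queue.popleft()
        if remaining <= q:
            order.append(name)                   # finishes within its slice
        else:
            queue.append((name, remaining - q))  # preempted, rotates to back
    return order

round_robin([("A", 5), ("B", 2), ("C", 3)], q=2)
```

Every job gets the CPU within one full rotation (good ART), but long jobs are stretched out over many rotations (poor ATT), matching the two cards above.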

30
New cards

MLFQ rule 1

Higher priority job runs over lower priority job

31
New cards

MLFQ rule 2

Same priority jobs scheduled with Round Robin

32
New cards

MLFQ rule 3

New jobs enter highest priority queue

33
New cards

MLFQ rule 4

Use full quantum → demotion; finish early → stay

34
New cards

MLFQ rule 5

Periodic boost moves all jobs to top queue

35
New cards

MLFQ purpose

Approximate SJF without knowing burst times

36
New cards

MLFQ starvation prevention

Priority boost prevents permanent starvation
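Rules 1-4 can be sketched as a small simulation. A minimal sketch, assuming a fixed quantum of 2 ticks, 3 levels, no priority boost, and all jobs present at the start:

```python
from collections import deque

# Minimal MLFQ sketch following rules 1-4 from these cards (no boost):
# new jobs enter the top queue; using a full quantum demotes a job.
def mlfq(jobs, quantum=2, levels=3):
    queues = [deque() for _ in range(levels)]
    for name, burst in jobs:
        queues[0].append((name, burst))           # rule 3: enter highest queue
    trace = []
    while any(queues):
        lvl = next(i for i, q in enumerate(queues) if q)  # rule 1: highest first
        name, remaining = queues[lvl].popleft()           # rule 2: RR within level
        run = min(quantum, remaining)
        trace.append((name, lvl, run))
        if remaining > run:                               # rule 4: full quantum used
            queues[min(lvl + 1, levels - 1)].append((name, remaining - run))
    return trace  # (job, level it ran at, ticks run)

mlfq([("cpu", 6), ("io", 1)])
```

The CPU-bound job sinks through the levels while the short job finishes at the top, which is how MLFQ approximates SJF without knowing burst times in advance.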

37
New cards

CFS (Completely Fair Scheduler)

Linux scheduler giving proportional CPU time

38
New cards

CFS data structure

Red-black tree sorted by vruntime

39
New cards

CFS selection rule

Run job with lowest vruntime

40
New cards

CFS complexity

O(log n) per operation

41
New cards

Vruntime

Virtual runtime tracking weighted CPU usage

42
New cards

CFS vruntime update

vruntime += actual_time × (default_weight / job_weight)

43
New cards

Nice values

Range from -20 (high priority) to +19 (low priority)

44
New cards

Default CFS weight

1024 (nice = 0)

45
New cards

CFS time slice

Sched_latency × (job weight / total weight)

46
New cards

Sched latency (typical)

48 ms
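The CFS formulas in the cards above can be checked directly. A sketch using the constants stated in this deck (default weight 1024, sched_latency 48 ms):

```python
# CFS bookkeeping sketch under the assumptions in these cards:
# default weight 1024 (nice 0), sched_latency = 48 ms.
DEFAULT_WEIGHT = 1024
SCHED_LATENCY = 48  # ms

def vruntime_delta(actual_ms, weight):
    # vruntime advances more slowly for heavier (higher-weight) jobs
    return actual_ms * (DEFAULT_WEIGHT / weight)

def time_slice(weight, total_weight):
    # each job's proportional share of one scheduling latency period
    return SCHED_LATENCY * (weight / total_weight)

# Two jobs: nice-0 (weight 1024) vs. a heavier job (weight 3072):
time_slice(1024, 1024 + 3072)  # -> 12.0 ms for the nice-0 job
vruntime_delta(12, 1024)       # -> 12.0 (nice-0 job: vruntime == real time)
```

The heavier job gets a longer slice and accrues vruntime more slowly, which is how proportional sharing falls out of always running the lowest vruntime.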

47
New cards

Minimum granularity

~6 ms

48
New cards

CFS fairness

Jobs that run less stay near front → no starvation

49
New cards

CFS waking jobs

Set vruntime to max(previous, current minimum)

50
New cards

Stride scheduling

Deterministic proportional share scheduler

51
New cards

Stride formula

stride = NUM_MAX / tickets

52
New cards

Pass value

Update pass += stride after each run

53
New cards

Stride selection

Run job with lowest pass value

54
New cards

Stride advantage

Precise proportional fairness

55
New cards

Stride disadvantage

A new job entering with pass = 0 can monopolise the CPU until its pass catches up

56
New cards

Stride issue

Changing priorities requires recalculating pass values
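The stride mechanism in cards 50-53 can be sketched directly. Assumptions: NUM_MAX is an arbitrary large constant (10,000 here, as in common presentations) and ties break deterministically by name:

```python
# Stride scheduling sketch: stride = NUM_MAX / tickets; always run
# the job with the lowest pass, then advance pass += stride.
NUM_MAX = 10_000

def stride_schedule(tickets, steps):
    stride = {j: NUM_MAX // t for j, t in tickets.items()}
    passes = {j: 0 for j in tickets}
    history = []
    for _ in range(steps):
        j = min(passes, key=lambda k: (passes[k], k))  # lowest pass runs
        history.append(j)
        passes[j] += stride[j]
    return history

# A holds 100 tickets, B holds 50 -> A runs exactly twice as often
stride_schedule({"A": 100, "B": 50}, 6)
```

Unlike lottery scheduling, the 2:1 ratio holds exactly over any window, which is the deterministic fairness advantage noted above.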

57
New cards

SQMS (Single Queue Multiprocessor Scheduling)

One shared queue for all CPUs

58
New cards

SQMS problem 1

Lock contention between CPUs

59
New cards

SQMS problem 2

Poor cache affinity (jobs move between CPUs)

60
New cards

MQMS (Multi-Queue Multiprocessor Scheduling)

Each CPU has its own queue

61
New cards

MQMS advantage

Better cache locality, no locking

62
New cards

MQMS problem

Load imbalance across CPUs

63
New cards

Linux CFS multiprocessor

Separate red-black tree per CPU + periodic load balancing

64
New cards

Cache affinity

Keeping jobs on same CPU improves performance

65
New cards

64 KB

Linux pipe buffer size

66
New cards

48 ms

Typical CFS scheduling latency

67
New cards

6 ms

CFS minimum granularity

68
New cards

1024

CFS default weight

69
New cards

-20 to +19

Nice value range

70
New cards

8 MB

Default Linux stack size

71
New cards

MLFQ vs CFS (core difference)

MLFQ infers behaviour; CFS uses explicit fairness via vruntime

72
New cards

MLFQ structure

Multiple priority queues

73
New cards

CFS structure

Single red-black tree

74
New cards

FCFS schedule order

Jobs execute strictly in arrival order

75
New cards

Convoy leader (FCFS)

Long job at front causing delays

76
New cards

RR scheduling behaviour

Job runs for q or until completion, then rotates

77
New cards

RR fairness

All jobs get CPU regularly

78
New cards

Priority-based RR (duplicate pointers)

More entries = higher effective priority

79
New cards

Duplicate pointer issue

Increased overhead and complexity

80
New cards

Weighted RR (w × q)

Jobs get multiple consecutive quanta

81
New cards

Weighted RR downside

Poor interactivity (long uninterrupted runs)

82
New cards

MLFQ priority for CPU-bound jobs

Lower priority (demoted frequently)

83
New cards

MLFQ priority for interactive jobs

Higher priority (short bursts avoid demotion)

84
New cards

Effect of adding CPUs

Does not change MLFQ priority behaviour, only parallelism

85
New cards

Boost period (T small)

Frequent boosts → more fairness

86
New cards

Boost period (T large)

Less fairness, more starvation risk

87
New cards

More levels (L larger)

Finer priority control

88
New cards

Lottery scheduling

Probabilistic scheduling using tickets

89
New cards

Lottery expected runs

n × p

90
New cards

Lottery variance

np(1-p)

91
New cards

Relative deviation

Decreases as number of lotteries increases
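A lottery draw is just a ticket-weighted random choice. A sketch with an illustrative 75/25 ticket split and a fixed seed for reproducibility:

```python
import random

# Lottery scheduling sketch: each draw picks a job with probability
# proportional to its tickets. Over n lotteries a job with share p
# expects n*p wins with variance n*p*(1-p).
def lottery(tickets, n, seed=0):
    rng = random.Random(seed)
    jobs = list(tickets)
    weights = [tickets[j] for j in jobs]
    wins = {j: 0 for j in jobs}
    for _ in range(n):
        wins[rng.choices(jobs, weights=weights)[0]] += 1
    return wins

# A holds 75 of 100 tickets (p = 0.75): expect roughly 750 wins in 1000 draws
lottery({"A": 75, "B": 25}, 1000)
```

The relative deviation from n*p shrinks as n grows, so fairness is only probabilistic and only in the long run.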

92
New cards

CFS selection complexity

O(log n) using red-black tree

93
New cards

CFS update operations

Tree insert/remove also O(log n)

94
New cards

Multiprocessor scheduling goal

Balance load while preserving cache locality

95
New cards

Load balancing

Periodic migration of tasks between CPUs
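The migration step above can be sketched for MQMS-style per-CPU queues. A minimal sketch, assuming queue length as the load metric and migrating one task at a time:

```python
# Sketch of periodic load balancing across per-CPU queues (MQMS):
# move one task from the most-loaded queue to the least-loaded one.
def balance(queues):
    # queues: one list of task names per CPU
    busiest = max(queues, key=len)
    idlest = min(queues, key=len)
    if len(busiest) - len(idlest) > 1:
        idlest.append(busiest.pop())  # migration sacrifices cache affinity
    return queues

balance([["a", "b", "c"], []])
```

Each migration trades cache affinity for balance, which is why real schedulers (e.g. Linux CFS) only rebalance periodically rather than on every dispatch.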