CPU Scheduling - CS 439


28 Terms

1
New cards

State Queues

All of the processes in the OS reside in the queue for their current state (the running state holds only one process at a time)

2
New cards

Multiprogramming/Concurrency

One process on the CPU running and one or more doing I/O — increases system utilization and throughput when we overlap I/O and CPU activities (things are always queued up)

3
New cards

Long Term Scheduling

How does the OS determine the degree of multiprogramming (how many processes are running concurrently)? We’re worried about the number of jobs residing in primary memory at once. At what point should we stop accepting processes?

4
New cards

Short-Term Scheduling

How does (or SHOULD) the OS select a process from the ready queue to execute?

5
New cards

Scheduler

Schedules the next process on the CPU; MECHANISM (the means) to implement a POLICY (what we want to happen). MAY execute when a process switches from running to blocked, when a process is created or terminated, or when an interrupt occurs.

6
New cards

Non-preemptive

Scheduler runs only when processes block or terminate — NOT on hardware interrupts; we never stop a process before it stops itself by blocking or terminating

7
New cards

Preemptive

Scheduler runs on interrupts, mostly timer, but also system calls and other hardware device interrupts

8
New cards

Scheduler Criteria

  1. CPU Utilization

  2. Throughput

  3. Turnaround Time

  4. Response Time

  5. Waiting Time

9
New cards

CPU Utilization

Percentage of time that the CPU is busy (lots of history in operating systems was made on maximizing this criterion)

10
New cards

Throughput

Number of processes completing in a unit of time. We invented batch systems to increase this by streamlining how many processes complete in a given period. Shortest Job First also improves this criterion.

11
New cards

Turnaround time

Length of time to run a process from initialization to termination, including all of the waiting time. We also focused on this with batch systems, where we overlapped I/O to avoid idle time when nothing was running on the CPU.

12
New cards

Response Time

Time between issuing a command and getting a result. We start to care about this criterion with the introduction of interactive timesharing in Phase 2 of History of Operating Systems — we introduce multiple users and suddenly care about user experience. Round Robin optimizes (minimizes) this criterion.

13
New cards

Waiting Time

Total amount of time that a process is in the ready queue. We care about this with time sharing and our general criterion of fairness — we want to make sure that all processes are allotted time on the CPU eventually. Also about user experience.
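The criteria above can be made concrete with a tiny worked example in Python. The three jobs, their arrival times, and their CPU bursts are invented for illustration; they simply run back-to-back in arrival order:

```python
# Worked example: turnaround time (completion - arrival) and waiting
# time (turnaround - CPU burst) for three hypothetical jobs run
# back-to-back in arrival order. All numbers are invented.
jobs = [("A", 0, 5), ("B", 1, 3), ("C", 2, 1)]  # (name, arrival, burst)

time = 0
turnaround = {}
waiting = {}
for name, arrival, burst in jobs:             # run in arrival order
    time = max(time, arrival) + burst         # job completes here
    turnaround[name] = time - arrival         # init-to-termination span
    waiting[name] = turnaround[name] - burst  # time spent in ready queue

print(turnaround)  # {'A': 5, 'B': 7, 'C': 7}
print(waiting)     # {'A': 0, 'B': 4, 'C': 6}
```

Note that C has a short burst but a long wait — exactly the situation the later scheduling policies try to improve.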

14
New cards

First Come First Served (FCFS) or FIFO (First In, First Out)

Scheduler executes jobs in arrival order. Jobs run until they either complete or block on I/O. In early FCFS schedulers, the job did not relinquish the CPU even when it was doing I/O (busy waiting).
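FCFS’s main weakness — one long job making every job behind it wait (the convoy effect) — can be shown numerically. A minimal Python sketch, with invented burst times and all jobs assumed to arrive at time 0:

```python
# Convoy effect under FCFS: average waiting time depends heavily on
# whether the long job happens to arrive first. Bursts are invented.
def fcfs_avg_wait(bursts):
    wait, elapsed = 0, 0
    for b in bursts:        # FCFS: run strictly in arrival order
        wait += elapsed     # each job waits for everything before it
        elapsed += b
    return wait / len(bursts)

print(fcfs_avg_wait([24, 3, 3]))  # long job first -> 17.0
print(fcfs_avg_wait([3, 3, 24]))  # long job last  -> 3.0
```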

15
New cards

Round Robin

  • Add a timer and use a preemptive policy (stop a process before it is done, even when it isn’t blocking itself)

  • Run each process for its time slice (scheduling quantum) — if a process blocks or terminates before its time slice is up, we move on to the next process’s turn (which gets its own full quantum)

  • After each time slice, move the running process to the back of the queue
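The bullets above can be sketched in a few lines of Python. The job names, burst lengths, and quantum are invented; a job whose remaining work fits in its slice is treated as finishing (or blocking) and leaves the queue:

```python
from collections import deque

# Round Robin sketch: run each job for up to `quantum` units, then
# requeue it at the back. Burst times are invented for illustration.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())   # (name, remaining work) pairs
    order = []                      # sequence of turns on the CPU
    while queue:
        name, remaining = queue.popleft()
        order.append(name)
        if remaining > quantum:     # slice expired: back of the queue
            queue.append((name, remaining - quantum))
        # else: job finishes (or blocks) within its slice and leaves
    return order

print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=2))
# ['A', 'B', 'C', 'A', 'C', 'A']
```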

16
New cards

Selecting a time slice

  • Too large: waiting time suffers (processes wait too long on other processes before they can run on the CPU); degenerates to FCFS if processes are never preempted (all the tasks get done in arrival order since they’re short enough to finish within the time slice)

  • Too small: throughput suffers because too much time is spent context switching

  • Balance the two by selecting a time slice where context switching is roughly 1% of the time slice (today between 10 and 100 milliseconds, since context switches take roughly 0.1 to 1 millisecond)
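The 1% rule is just one line of arithmetic. A Python sketch, with illustrative context-switch costs:

```python
# The 1% rule from the card: pick a time slice so that the context
# switch costs roughly 1% of it. Switch costs here are illustrative.
def time_slice_for(switch_cost_ms, overhead_fraction=0.01):
    return switch_cost_ms / overhead_fraction

print(time_slice_for(0.1))  # 0.1 ms switch  -> about a 10 ms slice
print(time_slice_for(1.0))  # 1 ms switch    -> about a 100 ms slice
```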

17
New cards

Context Switch Def

Changing from one process to another. The main source of overhead for a scheduling policy

18
New cards

Context Switch Process

  1. Process is running

  2. Process blocks or is interrupted

  3. Mode switch to kernel

  4. It’s time for the scheduler to run!

  5. Scheduler saves executing process state (to its PCB — remember that it has those registers)

  6. Scheduler chooses new process to run (based off of scheduling policy)

  7. Scheduler loads its state (from its PCB)

  8. Process is running
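Steps 5–8 can be mimicked with a toy Python model in which PCBs are dictionaries and the "registers" are just a program counter and stack pointer. All process names, register fields, and values are invented stand-ins:

```python
# Toy model of steps 5-8 above: the scheduler saves the running
# process's registers into its PCB, then loads the chosen process's
# registers back onto the CPU. All fields and values are invented.
cpu = {"pc": 104, "sp": 9000}                  # current register state
pcbs = {
    "P1": {"pc": None, "sp": None, "state": "running"},
    "P2": {"pc": 200, "sp": 7000, "state": "ready"},
}

def context_switch(old, new):
    pcbs[old].update(cpu, state="ready")       # step 5: save old state
    cpu.update({k: pcbs[new][k] for k in ("pc", "sp")})  # step 7: load
    pcbs[new]["state"] = "running"             # step 8: new one runs

context_switch("P1", "P2")
print(cpu)  # {'pc': 200, 'sp': 7000}
```

Step 6 (choosing which process runs next) is where the scheduling policy plugs in; here the choice is simply passed in as `new`.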

19
New cards

Shortest Job First Def

Schedule the job that has the least amount of work to do until its next I/O request or termination. I/O bound jobs get priority over CPU bound jobs since they have less work in between I/O.
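At its core, SJF is a sort by remaining work. A minimal Python sketch, with invented job names and burst lengths, showing I/O-bound jobs (short bursts between I/O) going first:

```python
# SJF sketch: pick jobs in order of least work remaining before their
# next I/O request or termination. Names and bursts are invented.
def sjf_order(bursts):
    return sorted(bursts, key=bursts.get)  # shortest burst first

jobs = {"cpu_bound": 40, "io_bound_a": 2, "io_bound_b": 5}
print(sjf_order(jobs))  # ['io_bound_a', 'io_bound_b', 'cpu_bound']
```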

20
New cards

Shortest Job First Disadvantages

  • We don’t know the future

  • Starvation of long processes if smaller jobs are always dynamically coming into the system.

21
New cards

Using the Past to Predict the Future

  • If a process was I/O bound in the past, it is likely to be I/O bound in the future —> favor jobs that use little CPU time between I/O, running them before those that use a lot

  • Relies on past behavior and changes in behavior result in changes to scheduling decisions —> adaptive (a process that was not I/O bound but suddenly is can move up in priority)
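The standard way to turn past bursts into a prediction (not named on the card, but the usual textbook technique) is an exponential average: prediction = alpha * last_burst + (1 - alpha) * old_prediction. A Python sketch with invented burst values:

```python
# Exponential-average predictor for the next CPU burst. Recent bursts
# dominate the estimate, so a job that changes behavior (e.g. becomes
# I/O bound) sees its prediction adapt. Inputs are invented.
def predict_next(prediction, bursts, alpha=0.5):
    for burst in bursts:
        prediction = alpha * burst + (1 - alpha) * prediction
    return prediction

# A job that was CPU-bound (long bursts) turns I/O-bound (short bursts):
print(predict_next(10.0, [10, 10, 2, 2, 2]))  # -> 3.0
```

The falling prediction is exactly the adaptivity the card describes: the job’s priority can rise as its predicted burst shrinks.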

22
New cards

Multilevel Feedback Queues

  • Multiple queues with different priorities.

  • Use Round Robin scheduling at each priority level, running jobs in the highest priority queue first.

  • Once those finish, the OS runs jobs out of the next priority queue, etc.

  • Round Robin time slices increase exponentially at lower priorities (giving jobs more chance to finish up or move up the queues)

23
New cards

MLFQs Adjusting Priorities

  1. Job starts in the highest priority

  2. If job’s time slice expires, drop its priority one level, but no further than the lowest level.

  3. If a job’s time slice does not expire (the context switch comes from an I/O request instead), then increase its priority one level, up to the top level
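The three rules above reduce to a small priority-update function. A Python sketch, where queue 0 is highest priority and the number of levels is an invented parameter:

```python
# MLFQ priority adjustment per the three rules above. Queue 0 is the
# highest priority; NUM_LEVELS is an illustrative choice.
NUM_LEVELS = 3

def adjust_priority(level, slice_expired):
    if slice_expired:                          # used its whole quantum:
        return min(level + 1, NUM_LEVELS - 1)  # drop a level (floored)
    return max(level - 1, 0)                   # blocked on I/O: move up

assert adjust_priority(0, slice_expired=True) == 1
assert adjust_priority(2, slice_expired=True) == 2   # already lowest
assert adjust_priority(1, slice_expired=False) == 0
```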

24
New cards

MLFQ and Fairness?

  • MLFQ can lead to starvation if not careful — could still never reach the low priority jobs if there are constantly jobs coming in

  • One solution: give each queue a fraction of the CPU time (so each queue gets a chance) — only fair if there is an even distribution of jobs among the queues

  • Another solution: adjust the priority of jobs as they do not get serviced (can bump them all back up periodically) —> avoid starvation but average waiting time suffers when the system is overloaded because all the jobs end up with high priority at some point (need to sort them back out)

25
New cards

Where does the boot program come from?

The CPU loads the boot program (either UEFI or BIOS) from ROM (read-only memory)

26
New cards

Boot Program

  1. Examines/checks the machine configuration (hardware of the machine; needs to know how many CPUs, how much memory, number and type of hardware devices)

  2. Builds a configuration structure describing the hardware (data structure that holds details about the hardware)

  3. Loads the OS kernel and gives it the configuration structure (so the OS knows what it is managing)

27
New cards

Operating System Initialization

  1. Initialize kernel data structures (queues for each of the states for example)

  2. Initialize the state of all hardware devices

  3. Create a number of processes to start operation

28
New cards

Idle Loop

If there are no user programs available, the OS runs the idle loop, which performs some system management and profiling, then halts the processor and enters a low-power mode (when our computer sleeps) — it will wake up on interrupts from hardware devices.