State Queues
All of the processes in the OS reside in the queue for their current state (the running state holds at most one process per CPU at a time)
Multiprogramming/Concurrency
One process on the CPU running and one or more doing I/O — increases system utilization and throughput when we overlap I/O and CPU activities (things are always queued up)
Long Term Scheduling
How does the OS determine the degree of multiprogramming (how many processes run concurrently)? We’re worried about the number of jobs residing in primary memory at once. At what point should we stop accepting processes?
Short-Term Scheduling
How does (or how SHOULD) the OS select a process from the ready queue to execute?
Scheduler
Schedules the next process on the CPU; a MECHANISM (the means) to implement a POLICY (what we want to happen). May execute when a process switches from running to blocked, a process is created or terminated, or an interrupt occurs.
Non-preemptive
Scheduler runs when processes block or terminate — NOT on hardware interrupts; we never stop a process before it stops itself by blocking or terminating
Preemptive
Scheduler runs on interrupts, mostly timer, but also system calls and other hardware device interrupts
Scheduler Criteria
CPU Utilization
Throughput
Turnaround Time
Response Time
Waiting Time
CPU Utilization
Percentage of time that the CPU is busy (lots of history in operating systems was made on maximizing this criterion)
Throughput
Number of processes completing per unit of time. Batch systems were invented to increase this, because running jobs back-to-back streamlined how many completed in a period of time. Shortest Job First also tends to maximize it, since short jobs finish quickly.
Turnaround time
Length of time to run a process from initialization to termination, including all of the waiting time. Batch systems also focused on this: overlapping I/O with computation avoided waiting while nothing ran on the CPU.
Response Time
Time between issuing a command and getting a result. We start to care about this criterion with the introduction of interactive timesharing in Phase 2 of History of Operating Systems — we introduce multiple users and suddenly care about user experience. Round Robin is designed to minimize this.
Waiting Time
Total amount of time that a process is in the ready queue. We care about this with time sharing and our general criterion of fairness — we want to make sure that all processes are allotted time on the CPU eventually. Also about user experience.
First Come First Served (FCFS), also called FIFO (First In, First Out)
Scheduler executes jobs in arrival order. Jobs run until they either complete or block on I/O. In early FCFS schedulers, the job did not relinquish the CPU even when it was doing I/O (busy waiting).
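FCFS waiting time can be computed directly from arrival order. A minimal sketch (the burst times are hypothetical, all jobs assumed to arrive at time 0):

```python
def fcfs_waiting_times(bursts):
    """Per-job waiting times under FCFS when all jobs arrive at time 0."""
    waits = []
    elapsed = 0
    for burst in bursts:
        waits.append(elapsed)   # each job waits for everything ahead of it
        elapsed += burst
    return waits

# A long job arriving first makes everyone behind it wait:
print(fcfs_waiting_times([24, 3, 3]))  # [0, 24, 27]
```

Note how the single 24-unit job drags the average wait up to 17, which is the weakness Round Robin and Shortest Job First address.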
Round Robin
Add a timer and use a pre-emptive policy (stop something before it is done even when it’s not blocking itself)
Run each process for its time slice (scheduling quantum) — if a process blocks or terminates before its time slice is up, we move on to the next process, which still gets a full quantum
After each time slice, move the running process to the back of the queue
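The three rules above can be sketched as a small simulation (burst times and quantum are hypothetical; all jobs assumed to arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return finish time per job index under Round Robin."""
    queue = deque(enumerate(bursts))          # (job_id, remaining work)
    finish = {}
    clock = 0
    while queue:
        job, remaining = queue.popleft()
        run = min(quantum, remaining)         # a job that blocks/ends early yields the CPU
        clock += run
        if remaining - run == 0:
            finish[job] = clock
        else:
            queue.append((job, remaining - run))  # back of the queue
    return finish

print(round_robin([24, 3, 3], quantum=4))  # {1: 7, 2: 10, 0: 30}
```

The short jobs (1 and 2) finish at times 7 and 10 instead of waiting behind the 24-unit job as they would under FCFS.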
Selecting a time slice
Too large: waiting time suffers (processes wait too long on other processes before they can run on the CPU); degenerates to FCFS if processes are never preempted (every job finishes or blocks within its time slice, so jobs run to completion in arrival order)
Too small: throughput suffers because too much time is spent context switching
Balance the two by selecting a time slice where context switching is roughly 1% of the time slice (today between 10 and 100 milliseconds, since a context switch takes up to about 1 millisecond)
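The arithmetic behind the 1% rule is short; assuming a 1 ms context switch (the upper end of the range in the notes):

```python
# If a context switch should cost about 1% of a time slice,
# the slice must be roughly 100x the switch cost.
context_switch_ms = 1.0
time_slice_ms = context_switch_ms / 0.01
print(time_slice_ms)  # 100.0, the top of the 10-100 ms range
```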
Context Switch Def
Changing from one process to another. Main source of overhead for scheduling policy
Context Switch Process
Process is running
Process blocks or is interrupted
Mode switch to kernel
It’s time for the scheduler to run!
Scheduler saves executing process state (to its PCB — remember that it has those registers)
Scheduler chooses new process to run (based off of scheduling policy)
Scheduler loads its state (from its PCB)
Process is running
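The save/choose/load steps above can be sketched in miniature. This is a toy model: the PCB fields, the register dict, and FIFO selection are all simplifications (real kernels do this in assembly):

```python
class PCB:
    """Toy process control block: just a pid and saved registers."""
    def __init__(self, pid):
        self.pid = pid
        self.registers = {}               # saved CPU state lives in the PCB

def context_switch(current, ready_queue, cpu_registers):
    current.registers = dict(cpu_registers)   # save executing process state to its PCB
    ready_queue.append(current)
    chosen = ready_queue.pop(0)               # scheduling policy: simple FIFO here
    return chosen, dict(chosen.registers)     # load the chosen process's saved state

a, b = PCB(1), PCB(2)
b.registers = {"pc": 0x400}
proc, regs = context_switch(a, [b], {"pc": 0x200})
print(proc.pid, regs)   # process 2 runs, with its own saved program counter
```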
Shortest Job First Def
Schedule the job that has the least amount of work to do until its next I/O request or termination. I/O bound jobs get priority over CPU bound jobs since they have less work in between I/O.
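With known burst lengths, SJF is just a sort. A minimal sketch (hypothetical bursts, all jobs arriving at time 0, non-preemptive):

```python
def sjf_order(bursts):
    """Job indices in the order SJF would run them: shortest burst first."""
    return sorted(range(len(bursts)), key=lambda j: bursts[j])

bursts = [6, 8, 7, 3]
print(sjf_order(bursts))  # [3, 0, 2, 1]
```

Running shorter bursts first is what minimizes average waiting time for a fixed set of jobs; the "we don't know the future" disadvantage below is precisely that real schedulers don't have the `bursts` list in advance.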
Shortest Job First Disadvantages
We don’t know the future
Starvation of long processes if smaller jobs are always dynamically coming into the system.
Using the Past to Predict the Future
If a process was I/O bound in the past, it is likely to be I/O bound in the future —> favor jobs that use little CPU time between I/O requests, running them before those that use more
Relies on past behavior; changes in behavior result in changes to scheduling decisions —> adaptive (a process that was not I/O bound but suddenly becomes I/O bound can move up in priority)
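One common way to implement "use the past to predict the future" is exponential averaging of CPU burst lengths; the formula, the alpha = 0.5 weight, and the initial guess below are standard textbook choices, not from these notes:

```python
def predict_next_burst(history, alpha=0.5, initial_guess=10.0):
    """tau_next = alpha * t + (1 - alpha) * tau, folded over observed bursts t."""
    tau = initial_guess
    for t in history:
        tau = alpha * t + (1 - alpha) * tau   # recent bursts weigh the most
    return tau

print(predict_next_burst([6, 4, 6]))  # 6.0
```

Because old bursts decay geometrically, a process that shifts from CPU bound to I/O bound quickly gets a smaller prediction, which is exactly the adaptivity described above.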
Multilevel Feedback Queues
Multiple queues with different priorities.
Use Round Robin scheduling at each priority level, running jobs in the highest priority queue first.
Once the higher-priority queues are empty, the OS runs jobs out of the next priority queue, etc.
Round Robin time slices increase exponentially at lower priorities (giving jobs more chance to finish up or move up the queues)
MLFQs Adjusting Priorities
Job starts in the highest priority
If job’s time slice expires, drop its priority one level, but no further than the lowest level.
If a job’s time slice does not expire (the context switch comes from an I/O request instead), then increase its priority one level, up to the top level
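The two adjustment rules and the exponential time slices can be sketched as small functions. The three levels and the 10 ms base slice are assumptions for illustration; level 0 is the highest priority:

```python
NUM_LEVELS = 3

def time_slice(level):
    """Time slices double at each lower priority level (base 10 ms assumed)."""
    return 10 * (2 ** level)

def adjust_priority(level, used_full_slice):
    """Drop a level when the slice expires; rise a level when the job blocked early."""
    if used_full_slice:
        return min(level + 1, NUM_LEVELS - 1)   # but no further than the lowest level
    return max(level - 1, 0)                    # up to, but not past, the top level

print(time_slice(2))              # 40 (ms) at the lowest level
print(adjust_priority(0, True))   # 1: CPU-bound job drifts down
print(adjust_priority(1, False))  # 0: I/O-bound job moves back up
```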
MLFQ and Fairness?
MLFQ can lead to starvation if not careful — could still never reach the low priority jobs if there are constantly jobs coming in
One solution: give each queue a fraction of the CPU time (so each queue gets a chance) — only fair if there is an even distribution of jobs among the queues
Another solution: adjust the priority of jobs as they do not get serviced (can bump them all back up periodically) —> avoid starvation but average waiting time suffers when the system is overloaded because all the jobs end up with high priority at some point (need to sort them back out)
Where does the boot program come from?
CPU loads the boot program (either UEFI or BIOS) from ROM (read-only memory)
Boot Program
Examines/checks the machine configuration (hardware of the machine; needs to know how many CPUs, how much memory, number and type of hardware devices)
Builds a configuration structure describing the hardware (data structure that holds details about the hardware)
Loads the OS kernel and gives it the configuration structure (so the OS knows what it is managing)
Operating System Initialization
Initialize kernel data structures (queues for each of the states for example)
Initialize the state of all hardware devices
Creates a number of processes to start operation
Idle Loop
If there are no user programs available, the OS runs the idle loop, which performs some system management and profiling, then halts the processor and enters a low-power mode (when our computer sleeps) — it will wake up on interrupts from hardware devices.