multiprogramming
Maximum CPU utilization is obtained with ?
CPU–I/O Burst Cycle
Process execution consists of a cycle of CPU execution and I/O wait
I/O burst
A CPU burst is followed by __________
CPU burst
_________ distribution is of main concern
CPU scheduler
selects from among the processes in the ready queue and allocates a CPU core to one of them
1. Switches from running to waiting state
2. Switches from running to ready state
3. Switches from waiting to ready
4. Terminates
CPU scheduling decisions may take place when a process:
Switches from running to waiting state
Terminates
For situations _________ and ________, there is no choice in terms of scheduling. A new process (if one exists in the ready queue) must be selected for execution.
Switches from running to ready state
Switches from waiting to ready
For situations _________ and ________, there is a choice in terms of scheduling
nonpreemptive
Switches from running to waiting state
Terminates
preemptive
Switches from running to ready state
Switches from waiting to ready
Nonpreemptive scheduling
Under _______________, once the CPU has been allocated to a process, the process keeps the CPU until it releases it either by terminating or by switching to the waiting state.
Windows, macOS, Linux, and UNIX
Virtually all modern operating systems, including these 4, use preemptive scheduling algorithms.
Preemptive scheduling
can result in race conditions when data are shared among several processes.
preempted
While one process is updating the data, it is __________ so that the second process can run.
second process
The _________ then tries to read the data, which are in an inconsistent state
Dispatcher module
Switching context
Switching to user mode
Jumping to the proper location in the user program to restart that program
Dispatcher module involves:
Dispatch latency
time it takes for the dispatcher to stop one process and start another running
CPU utilization
Throughput
Turnaround time
Waiting time
Response time
Scheduling Criteria
CPU utilization
keep the CPU as busy as possible
Throughput
# of processes that complete their execution per time unit
Turnaround time
amount of time to execute a particular process
Waiting time
amount of time a process has been waiting in the ready queue
Response time
amount of time it takes from when a request was submitted until the first response is produced.
▪ Max CPU utilization
▪ Max throughput
▪ Min turnaround time
▪ Min waiting time
▪ Min response time
Scheduling Algorithm Optimization Criteria
P1 = 0
P2 = 24
P3 = 27
Waiting time
(0 + 24 + 27)/3 = 17
Average waiting time:
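The numbers above come from the classic FCFS example; a minimal C sketch (my own illustration, assuming burst times of 24, 3, and 3 for processes arriving in the order P1, P2, P3) reproduces them:

```c
#include <stdio.h>

/* FCFS sketch: processes are served strictly in arrival order.
 * Burst times 24, 3, 3 are assumed; they yield the waiting times
 * 0, 24, 27 quoted above. */
int main(void) {
    int burst[] = {24, 3, 3};          /* P1, P2, P3 */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("P%d waiting time = %d\n", i + 1, wait);
        total_wait += wait;
        wait += burst[i];              /* next process waits for all earlier bursts */
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);  /* 17.00 */
    return 0;
}
```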
Convoy effect
short process behind long process
Convoy effect
Consider one CPU-bound and many I/O-bound processes
SJF is optimal
gives minimum average waiting time for a given set of processes
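A minimal nonpreemptive SJF sketch (my own illustration with example burst times, all processes assumed to arrive at time 0): running the shortest burst first minimizes the average waiting time.

```c
#include <stdio.h>
#include <stdlib.h>

/* Nonpreemptive SJF sketch: sort the bursts ascending and serve them
 * in that order. Burst values are illustrative. */
static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

int main(void) {
    int burst[] = {6, 8, 7, 3};
    int n = sizeof burst / sizeof burst[0];

    qsort(burst, n, sizeof burst[0], cmp_int);   /* shortest job first */

    int wait = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += wait;
        wait += burst[i];
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);  /* 7.00 */
    return 0;
}
```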
shortest-remaining-time-first
Preemptive version called
How do we determine the length of the next CPU burst?
• Could ask the user
• Estimate
exponential averaging
Determining the length of the next CPU burst can be done by using the lengths of previous CPU bursts, with
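A small sketch of the exponential-averaging formula tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n); alpha = 0.5, the initial guess, and the burst history are illustrative values:

```c
#include <stdio.h>

/* Exponential averaging for predicting the next CPU burst:
 *   tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
 * where t_n is the measured length of the most recent burst and
 * tau_n the previous prediction. */
int main(void) {
    double alpha = 0.5;
    double tau = 10.0;                       /* initial guess tau_0 */
    double burst[] = {6.0, 4.0, 6.0, 4.0};   /* observed CPU bursts t_n */

    for (int i = 0; i < 4; i++) {
        tau = alpha * burst[i] + (1.0 - alpha) * tau;
        printf("after burst %d: predicted next burst = %.2f\n", i + 1, tau);
    }
    return 0;
}
```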
SJN algorithm
Whenever a new process arrives in the ready queue, the decision on which process to schedule next is redone using the ?
Yes, because it is preemptive
Is SRT more “optimal” than SJN in terms of the minimum average waiting time for a given set of processes?
time quantum
Each process gets a small unit of CPU time
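This is the round-robin idea; a minimal simulation sketch (quantum and burst values are my own illustration):

```c
#include <stdio.h>

/* Round-robin sketch: each unfinished process runs for at most one
 * time quantum per pass, then the next process gets the CPU. */
int main(void) {
    int remaining[] = {24, 3, 3};    /* remaining burst per process */
    int n = sizeof remaining / sizeof remaining[0];
    int quantum = 4, time = 0, left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0)
                continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                printf("P%d finishes at time %d\n", i + 1, time);
                left--;
            }
        }
    }
    return 0;
}
```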
number (integer)
A priority __________ is associated with each process
smallest integer = highest priority
The CPU is allocated to the process with the highest priority
• Preemptive
• Nonpreemptive
Priority Scheduling has two types:
SJF
is priority scheduling where priority is the inverse of predicted next CPU burst time
Problem = Starvation
low priority processes may never execute
Solution = Aging
as time progresses increase the priority of the process
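A toy sketch of aging (the boost interval and priority values are my own illustration; smaller number = higher priority, as above):

```c
#include <stdio.h>

/* Aging sketch: every 10 ticks spent waiting, a process's priority
 * number is decreased (i.e., its priority is raised), so low-priority
 * processes cannot starve forever. */
#define NPROC 3

int main(void) {
    int priority[NPROC] = {1, 50, 99};   /* P1 high, P3 very low */

    for (int tick = 1; tick <= 100; tick++) {
        for (int i = 0; i < NPROC; i++) {
            if (tick % 10 == 0 && priority[i] > 0)
                priority[i]--;           /* age: bump priority every 10 ticks */
        }
    }
    for (int i = 0; i < NPROC; i++)
        printf("P%d priority after aging: %d\n", i + 1, priority[i]);
    return 0;
}
```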
Round-Robin
Run the process with the highest priority. Processes with the same priority run _________
Multilevel queue
scheduler defined by the following parameters:
• Number of queues
• Scheduling algorithms for each queue
• Method used to determine which queue a process will enter when that process needs service
• Scheduling among the queues
(I suggest you really familiarize yourself with / memorize these, based on how sir makes the questions.)
Multilevel Queue
▪ With priority scheduling, have separate queues for each priority.
▪ Schedule the process in the highest-priority queue
real-time processes (highest priority)
system processes
interactive processes
batch processes (lowest priority)
Prioritization based upon process type:
Multilevel Feedback Queue
A process can move between the various queues.
• Number of queues
• Scheduling algorithms for each queue
• determine when to upgrade a process
• determine when to demote a process
• determine which queue a process will enter when that process needs service
Multilevel-feedback-queue scheduler defined by the following parameters:
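A toy sketch of how those parameters might look in practice (queue count, quanta, and the upgrade/demote rules here are illustrative choices, not anything fixed by the slides):

```c
#include <stdio.h>

/* Multilevel feedback queue sketch: three queues with different quanta;
 * a process that uses its whole quantum is demoted one level, and one
 * that blocks before the quantum expires is promoted. */
#define NQUEUES 3

static const int quantum[NQUEUES] = {8, 16, 0};  /* 0 = FCFS at the lowest level */

static int next_level(int level, int used_full_quantum) {
    if (used_full_quantum && level < NQUEUES - 1)
        return level + 1;                 /* demote CPU-bound behaviour */
    if (!used_full_quantum && level > 0)
        return level - 1;                 /* promote interactive behaviour */
    return level;
}

int main(void) {
    int level = 0;
    level = next_level(level, 1);   /* burned the whole quantum -> demoted */
    level = next_level(level, 1);   /* again -> lowest queue */
    level = next_level(level, 0);   /* blocked early -> promoted */
    printf("process now in queue %d (quantum %d)\n", level, quantum[level]);
    return 0;
}
```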
multilevel feedback queue
Aging can be implemented using
Thread Scheduling
▪ Distinction between user-level and kernel-level threads
▪ When threads supported, threads scheduled, not processes
process-contention scope (PCS)
Known as ______________ since scheduling competition is within the process
In many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP
Typically done via priority set by programmer
system-contention scope (SCS)
A kernel thread scheduled onto an available CPU uses ___________ – competition is among all threads in the system
API
allows specifying either PCS or SCS during thread creation
PTHREAD_SCOPE_PROCESS
schedules threads using PCS scheduling
PTHREAD_SCOPE_SYSTEM
schedules threads using SCS scheduling
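A minimal Pthreads sketch of setting the contention scope at thread creation (compile with -lpthread; note that some systems reject PTHREAD_SCOPE_PROCESS):

```c
#include <pthread.h>
#include <stdio.h>

/* Request SCS scheduling for a new thread via the attribute object. */
static void *work(void *arg) {
    (void)arg;
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "could not set SCS scope\n");

    pthread_attr_getscope(&attr, &scope);
    printf("scope = %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "PTHREAD_SCOPE_SYSTEM"
                                         : "PTHREAD_SCOPE_PROCESS");

    pthread_create(&tid, &attr, work, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```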
OS
Contention scope can be limited by the _______ – Linux and macOS allow only PTHREAD_SCOPE_SYSTEM
True
True or False? CPU scheduling is more complex when multiple CPUs are available
• Multicore CPUs
• Multithreaded cores
• NUMA systems
• Heterogeneous multiprocessing
Multiple-processor scheduling may apply to any one of the following architectures:
Symmetric multiprocessing (SMP)
is where each processor is self-scheduling.
common ready queue (a)
All threads may be in a ?
private queue of threads (b)
Each processor may have its own ?
Multicore Processors
▪ Recent trend to place multiple processor cores on same physical chip
▪ Faster and consumes less power
▪ Multiple threads per core also growing
• Takes advantage of a memory stall to make progress on another thread while the memory retrieval happens
> 1
Each core has ____ hardware threads.
switch
If one thread has a memory stall, __________ to another thread
Chip-multithreading (CMT)
assigns each core multiple hardware threads.
hyperthreading
Chip-multithreading (CMT) assigns each core multiple hardware threads. (Intel refers to this as _____________)
8 logical processors
On a quad-core system with 2 hardware threads per core, the operating system sees ?
software threads
hardware threads (logical processors)
Two levels of scheduling:
software threads
The operating system deciding which software thread to run on a logical CPU
hardware threads (logical processors)
How each core decides which hardware thread to run on the physical core
SMP
needs to keep all CPUs loaded for efficiency
Load balancing
attempts to keep workload evenly distributed
Push migration
a periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs
Pull migration
an idle processor pulls a waiting task from a busy processor
Processor Affinity
When a thread has been running on one processor, the cache contents of that processor stores the memory accesses by that thread.
Processor Affinity
We refer to this as a thread having affinity for a processor
Load balancing
may conflict with processor affinity: a thread may be moved from one processor to another to balance loads, but that thread then loses the contents of the cache of the processor it was moved off of.
Soft affinity
the operating system attempts to keep a thread running on the same processor, but no guarantees.
Hard affinity
allows a process to specify a set of processors it may run on.
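A minimal hard-affinity sketch using the Linux-specific sched_setaffinity call (Linux is an assumption beyond the card, and CPUs 0 and 1 are arbitrary choices):

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

/* Pin the calling process to CPUs 0 and 1 so the scheduler will only
 * run it there (hard affinity). */
int main(void) {
    cpu_set_t set;

    CPU_ZERO(&set);
    CPU_SET(0, &set);
    CPU_SET(1, &set);

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(set), &set) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("pid %d restricted to CPUs 0 and 1\n", (int)getpid());
    return 0;
}
```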
NUMA-aware
If the operating system is ________, it will assign memory close to the CPU the thread is running on.
Real-Time CPU Scheduling
Can present obvious challenges
Soft real-time systems
Critical real-time tasks have the highest priority, but no guarantee as to when tasks will be scheduled
Hard real-time systems
task must be serviced by its deadline
Event latency
the amount of time that elapses from when an event occurs to when it is serviced.
Interrupt latency
Dispatch latency
Two types of latencies affect performance
Interrupt latency
time from arrival of interrupt to start of routine that services interrupt
Dispatch latency
time for the scheduler to take the current process off the CPU and switch to another
Dispatch Latency
▪ Conflict phase of ____________:
1. Preemption of any process running in kernel mode
2. Release by low-priority processes of resources needed by high-priority processes
preemptive, priority-based scheduling
For real-time scheduling, scheduler must support
deadlines
For hard real-time must also provide ability to meet
periodic
Processes have new characteristics: _______ ones require CPU at constant intervals
Rate Monotonic Scheduling
A priority is assigned based on the inverse of its period
▪ Shorter periods = higher priority;
▪ Longer periods = lower priority
Earliest Deadline First Scheduling (EDF)
Priorities are assigned according to deadlines:
• The earlier the deadline, the higher the priority
• The later the deadline, the lower the priority
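A small EDF sketch (task names and deadlines are made up): at each scheduling point, pick the ready task whose absolute deadline is earliest.

```c
#include <stdio.h>
#include <stdlib.h>

/* EDF sketch: sort the ready tasks by absolute deadline; the front of
 * the sorted array is the highest-priority task. */
struct task {
    const char *name;
    int deadline;     /* absolute deadline */
};

static int by_deadline(const void *a, const void *b) {
    return ((const struct task *)a)->deadline -
           ((const struct task *)b)->deadline;
}

int main(void) {
    struct task ready[] = { {"T1", 50}, {"T2", 30}, {"T3", 80} };
    int n = sizeof ready / sizeof ready[0];

    qsort(ready, n, sizeof ready[0], by_deadline);

    for (int i = 0; i < n; i++)
        printf("%d) %s (deadline %d)\n", i + 1, ready[i].name, ready[i].deadline);
    return 0;
}
```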
T shares
are allocated among all processes in the system
N < T
An application receives N shares where
Proportional Share Scheduling
This ensures each application will receive N / T of the total processor time
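Quick worked example with hypothetical numbers: if T = 100 shares are allocated in total and an application holds N = 25 of them, it is guaranteed N / T = 25 / 100 = 25% of the total processor time.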
API
provides functions for managing real-time threads
SCHED_FIFO
SCHED_RR
Defines two scheduling classes for real-time threads:
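A minimal Pthreads sketch of requesting a real-time policy for a new thread (SCHED_FIFO here; SCHED_RR adds time slicing among equal-priority threads). The priority value 50 is illustrative, and setting these policies usually requires privileges:

```c
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Create a thread with an explicit real-time scheduling policy. */
static void *rt_work(void *arg) {
    (void)arg;
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    struct sched_param param;
    pthread_t tid;

    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);   /* or SCHED_RR */
    param.sched_priority = 50;
    pthread_attr_setschedparam(&attr, &param);

    if (pthread_create(&tid, &attr, rt_work, NULL) != 0)
        fprintf(stderr, "pthread_create failed (insufficient privileges?)\n");
    else
        pthread_join(tid, NULL);

    pthread_attr_destroy(&attr);
    return 0;
}
```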