Part 1: CPU scheduling


1
New cards

multiprogramming

Maximum CPU utilization is obtained with ?

2
New cards

CPU–I/O Burst Cycle

Process execution consists of a cycle of CPU execution and I/O wait

3
New cards

I/O burst

A CPU burst is followed by a(n) ?

4
New cards

CPU burst

_________ distribution is of main concern

5
New cards

CPU scheduler

selects from among the processes in ready queue, and allocates a CPU core to one of them

6
New cards

1. Switches from running to waiting state

2. Switches from running to ready state

3. Switches from waiting to ready

4. Terminates

CPU scheduling decisions may take place when a process:

7
New cards
  • Switches from running to waiting state

  • Terminates

For situations _________ and ________, there is no choice in terms of scheduling. A new process (if one exists in the ready queue) must be selected for execution.

8
New cards
  • Switches from running to ready state

  • Switches from waiting to ready

For situations _________ and ________, there is a choice in terms of scheduling

9
New cards

non preemptive

  • Switches from running to waiting state

  • Terminates

10
New cards

preemptive

  • Switches from running to ready state

  • Switches from waiting to ready

11
New cards

Nonpreemptive scheduling

Under _______________, once the CPU has been allocated to a process, the process keeps the CPU until it releases it either by terminating or by switching to the waiting state.

12
New cards

Windows, macOS, Linux, and UNIX

Virtually all modern operating systems, including *these 4*, use preemptive scheduling algorithms.

13
New cards

Preemptive scheduling

can result in race conditions when data are shared among several processes.

14
New cards

preempted

While one process is updating the data, it is __________ so that the second process can run.

15
New cards

second process

The _________ then tries to read the data, which are in an inconsistent state

16
New cards

Dispatcher module

gives control of the CPU core to the process selected by the CPU scheduler

17
New cards

Switching context

Switching to user mode

Jumping to the proper location in the user program to restart that program

Dispatcher module involves:

18
New cards

Dispatch latency

time it takes for the dispatcher to stop one process and start another running

19
New cards
  • CPU utilization

  • Throughput

  • Turnaround time

  • Waiting time

  • Response time

Scheduling Criteria

20
New cards

CPU utilization

keep the CPU as busy as possible

21
New cards

Throughput

# of processes that complete their execution per time unit

22
New cards

Turnaround time

amount of time to execute a particular process

23
New cards

Waiting time

amount of time a process has been waiting in the ready queue

24
New cards

Response time

amount of time it takes from when a request was submitted until the first response is produced.

25
New cards

Max CPU utilization

Max throughput

Min turnaround time

Min waiting time

Min response time

Scheduling Algorithm Optimization Criteria

26
New cards

P1 = 0

P2 = 24

P3 = 27

Waiting times in the classic FCFS example (P1, P2, P3 arriving in order with burst times 24, 3, 3):

27
New cards

(0 + 24 + 27)/3 = 17

Average waiting time:
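The figures above come from the standard FCFS example (burst times 24, 3, 3 for P1, P2, P3, arriving in that order). A minimal Python sketch of that calculation; the function name is my own:

```python
def fcfs_waiting_times(bursts):
    """Waiting time of each process under FCFS: the sum of all earlier bursts."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # this process waits for everything queued before it
        elapsed += burst
    return waits

waits = fcfs_waiting_times([24, 3, 3])   # P1, P2, P3 arrive in this order
print(waits)                             # [0, 24, 27]
print(sum(waits) / len(waits))           # 17.0
```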

28
New cards

Convoy effect

short process behind long process

29
New cards

Convoy effect

Consider one CPU-bound and many I/O-bound processes

30
New cards

SJF is optimal

gives minimum average waiting time for a given set of processes
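SJF's optimality can be seen on the same three bursts as the FCFS example: running them shortest-first gives a much lower average wait. A sketch assuming all processes arrive at time 0 (function name is my own):

```python
def sjf_average_wait(bursts):
    """Average waiting time when bursts run shortest-first (all arrive at t=0)."""
    waits, elapsed = [], 0
    for burst in sorted(bursts):   # shortest job first
        waits.append(elapsed)
        elapsed += burst
    return sum(waits) / len(waits)

# FCFS on [24, 3, 3] averages 17; SJF runs 3, 3, 24 instead
print(sjf_average_wait([24, 3, 3]))   # 3.0  (waits: 0, 3, 6)
```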

31
New cards

shortest-remaining-time-first

Preemptive version called

32
New cards

How do we determine the length of the next CPU burst?

• Could ask the user

• Estimate

33
New cards

exponential averaging

Determining Length of Next CPU Burst can be done by using the length of previous CPU bursts, using
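Exponential averaging predicts the next burst as tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n, where t_n is the length of the most recent burst and tau_n the previous estimate. A sketch; alpha = 0.5 and the sample values are chosen only for illustration:

```python
def next_burst_estimate(tau, t, alpha=0.5):
    """Exponential average: tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    return alpha * t + (1 - alpha) * tau

# Illustrative values: initial estimate 10, observed burst 6
print(next_burst_estimate(10, 6))   # 8.0
```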

34
New cards

SJN algorithm

Whenever a new process arrives in the ready queue, the decision on which process to schedule next is redone using the ?

35
New cards

Yes, because it is preemptive

Is SRT more “optimal” than SJN in terms of the minimum average waiting time for a given set of processes?

36
New cards

time quantum

Each process gets a small unit of CPU time
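Round-robin with a time quantum can be sketched as a queue simulation. Assumptions: all processes arrive at time 0, and a preempted process goes to the back of the queue; quantum 4 over bursts 24, 3, 3 is the common textbook illustration:

```python
from collections import deque

def round_robin_finish_times(bursts, quantum):
    """Completion time of each process under round-robin (all arrive at t=0)."""
    queue = deque(enumerate(bursts))
    finish, clock = [0] * len(bursts), 0
    while queue:
        i, remaining = queue.popleft()
        run = min(quantum, remaining)   # run for at most one time quantum
        clock += run
        if remaining > run:
            queue.append((i, remaining - run))  # preempted: back of the queue
        else:
            finish[i] = clock
    return finish

print(round_robin_finish_times([24, 3, 3], quantum=4))   # [30, 7, 10]
```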

37
New cards

number (integer)

A priority __________ is associated with each process

38
New cards

smallest integer = highest priority

The CPU is allocated to the process with the highest priority

39
New cards

• Preemptive

• Nonpreemptive

Priority Scheduling has two types:

40
New cards

SJF

is priority scheduling where priority is the inverse of predicted next CPU burst time

41
New cards

Problem = Starvation

low priority processes may never execute

42
New cards

Solution = Aging

as time progresses increase the priority of the process
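Priority selection plus aging can be sketched as below. The dictionary shape and the boost step are my own illustrative assumptions; smaller number = higher priority, matching the convention in these cards:

```python
def pick_and_age(ready, boost=1):
    """Choose the smallest-priority-number process, then age the others.
    `ready` maps process name -> priority number (smaller = higher priority)."""
    chosen = min(ready, key=ready.get)
    for name in ready:
        if name != chosen:
            ready[name] = max(0, ready[name] - boost)  # aging raises priority
    return chosen

ready = {"P1": 3, "P2": 1, "P3": 7}
print(pick_and_age(ready))   # P2
print(ready)                 # {'P1': 2, 'P2': 1, 'P3': 6}
```

Repeated calls eventually select even the lowest-priority processes, which is how aging prevents starvation.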

43
New cards

Round-Robin

Run the process with the highest priority. Processes with the same priority run _________

44
New cards

Multilevel queue

scheduler defined by the following parameters:

• Number of queues

• Scheduling algorithms for each queue

• Method used to determine which queue a process will enter when that process needs service

• Scheduling among the queues

(I suggest you really familiarize yourself with/memorize these, based on how sir makes the questions)


46
New cards

Multilevel Queue

With priority scheduling, have separate queues for each priority.

Schedule the process in the highest-priority queue

47
New cards
  • real-time processes (highest priority)

  • system processes

  • interactive processes

  • batch processes (lowest priority)

Prioritization based upon process type:

48
New cards

Multilevel Feedback Queue

A process can move between the various queues.

49
New cards

• Number of queues

• Scheduling algorithms for each queue

• determine when to upgrade a process

• determine when to demote a process

• determine which queue a process will enter when that process needs service

Multilevel-feedback-queue scheduler defined by the following parameters:

50
New cards

multilevel feedback queue

Aging can be implemented using

51
New cards

Thread Scheduling

Distinction between user-level and kernel-level threads

When threads are supported, it is threads that are scheduled, not processes

52
New cards

process-contention scope (PCS)

  • Known as ______________ since scheduling competition is within the process

  • Many-to-one and many-to-many models, thread library schedules user-level threads to run on LWP

  • Typically done via priority set by programmer

53
New cards

system-contention scope (SCS)

Kernel thread scheduled onto available CPU is ___________ – competition among all threads in system

54
New cards

API

allows specifying either PCS or SCS during thread creation

55
New cards

PTHREAD_SCOPE_PROCESS

schedules threads using PCS scheduling

56
New cards

PTHREAD_SCOPE_SYSTEM

schedules threads using SCS scheduling

57
New cards

OS – Linux and macOS only

Contention scope can be limited by the _______; some systems only allow PTHREAD_SCOPE_SYSTEM

58
New cards

True

True or False? CPU scheduling more complex when multiple CPUs are available

59
New cards

• Multicore CPUs

• Multithreaded cores

• NUMA systems

• Heterogeneous multiprocessing

Multiprocessor (multiple-processor) scheduling may involve any one of the following architectures:

60
New cards

Symmetric multiprocessing (SMP)

is where each processor is self scheduling.

61
New cards

common ready queue (a)

All threads may be in a ?

62
New cards

private queue of threads (b)

Each processor may have its own ?

63
New cards

Multicore Processors

Recent trend to place multiple processor cores on same physical chip

Faster and consumes less power

Multiple threads per core also growing

• Takes advantage of memory stall to make progress on another thread while memory retrieve happens

64
New cards

> 1

Each core has ____ hardware threads.

65
New cards

switch

If one thread has a memory stall, ______ to another thread

66
New cards

Chip-multithreading (CMT)

assigns each core multiple hardware threads.

67
New cards

hyperthreading

Chip-multithreading (CMT) assigns each core multiple hardware threads. (Intel refers to this as _____________)

68
New cards

8 logical processors

On a quad-core system with 2 hardware threads per core, the operating system sees ?

69
New cards
  • software threads

  • hardware threads (logical processors)

Two levels of scheduling:

70
New cards

software threads

The operating system deciding which software thread to run on a logical CPU

71
New cards

hardware threads (logical processors)

How each core decides which hardware thread to run on the physical core

72
New cards

SMP

need to keep all CPUs loaded for efficiency

73
New cards

Load balancing

attempts to keep workload evenly distributed

74
New cards

Push migration

a periodic task checks the load on each processor and, if it finds an overloaded CPU, pushes tasks from it to other CPUs

75
New cards

Pull migration

an idle processor pulls a waiting task from a busy processor
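Pull migration can be sketched as follows; the queue-per-CPU dictionary shape and task names are hypothetical simplifications:

```python
def pull_migration(queues):
    """Idle processors pull one waiting task each from the busiest processor.
    `queues` maps CPU id -> list of waiting tasks (illustrative shape)."""
    for cpu, tasks in queues.items():
        if not tasks:  # found an idle processor
            busiest = max(queues, key=lambda c: len(queues[c]))
            if queues[busiest]:
                tasks.append(queues[busiest].pop())  # migrate one task over
    return queues

qs = {0: ["t1", "t2", "t3"], 1: []}
print(pull_migration(qs))   # {0: ['t1', 't2'], 1: ['t3']}
```

Push migration would be the mirror image: the check starts from the overloaded queue rather than the idle one.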

76
New cards

Processor Affinity

When a thread has been running on one processor, the cache contents of that processor store the memory accesses made by that thread.

77
New cards

Processor Affinity

We refer to this as a thread having affinity for a processor

78
New cards

Load balancing

may interfere with processor affinity: a thread may be moved from one processor to another to balance loads, and it then loses the contents of the cache on the processor it was moved off of.

79
New cards

Soft affinity

the operating system attempts to keep a thread running on the same processor, but no guarantees.

80
New cards

Hard affinity

allows a process to specify a set of processors it may run on.

81
New cards

NUMA-aware

If the operating system is ________, it will assign memory closest to the CPU the thread is running on.

82
New cards

Real-Time CPU Scheduling

Can present obvious challenges

83
New cards

Soft real-time systems

Critical real-time tasks have the highest priority, but no guarantee as to when tasks will be scheduled

84
New cards

Hard real-time systems

task must be serviced by its deadline

85
New cards

Event latency

the amount of time that elapses from when an event occurs to when it is serviced.

87
New cards
  • Interrupt latency

  • Dispatch latency

Two types of latencies affect performance

88
New cards

Interrupt latency

time from arrival of interrupt to start of routine that services interrupt

89
New cards

Dispatch latency

time for the scheduler to take the current process off the CPU and switch to another

90
New cards

Dispatch Latency

Conflict phase of ____________:

1. Preemption of any process running in kernel mode

2. Release by low priority process of resources needed by high priority processes

91
New cards

preemptive, priority-based scheduling

For real-time scheduling, scheduler must support

92
New cards

deadlines

For hard real-time must also provide ability to meet

93
New cards

periodic

Processes have new characteristics: _______ ones require CPU at constant intervals

94
New cards

Rate Monotonic Scheduling

A priority is assigned based on the inverse of its period

Shorter periods = higher priority;

Longer periods = lower priority
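Rate-monotonic priority assignment (shorter period = higher priority) reduces to sorting tasks by period. A sketch; the task names and periods are illustrative:

```python
def rate_monotonic_order(tasks):
    """Order periodic tasks by period: shortest period = highest priority.
    `tasks` maps task name -> period (illustrative shape)."""
    return sorted(tasks, key=tasks.get)

# Illustrative periods: T1 runs every 50 time units, T2 every 100
print(rate_monotonic_order({"T2": 100, "T1": 50}))   # ['T1', 'T2']
```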

95
New cards

Earliest Deadline First Scheduling (EDF)

Priorities are assigned according to deadlines:

• The earlier the deadline, the higher the priority

• The later the deadline, the lower the priority
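EDF's selection rule fits in one line: at each scheduling point, run the ready task whose absolute deadline is earliest. Task names and deadline values here are illustrative:

```python
def edf_pick(ready):
    """EDF: run the ready task with the earliest absolute deadline.
    `ready` maps task name -> deadline (illustrative shape)."""
    return min(ready, key=ready.get)

print(edf_pick({"A": 80, "B": 25, "C": 50}))   # B
```

Unlike rate-monotonic scheduling, priorities are dynamic: they are recomputed from deadlines as tasks arrive and complete.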

96
New cards

T shares

are allocated among all processes in the system

97
New cards

N < T

An application receives N shares where

98
New cards

Proportional Share Scheduling

This ensures each application will receive N / T of the total processor time
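The N / T rule from cards 96-98 as a sketch; the share counts are illustrative:

```python
def cpu_fraction(n_shares, t_total):
    """Proportional share: an application holding N of T shares gets N/T of the CPU."""
    assert n_shares < t_total, "N must be less than T"
    return n_shares / t_total

# e.g. 25 of T = 100 total shares -> one quarter of processor time
print(cpu_fraction(25, 100))   # 0.25
```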

99
New cards

API

provides functions for managing real-time threads

100
New cards
  • SCHED_FIFO

  • SCHED_RR

Defines two scheduling classes for real-time threads: