CPU Scheduling (Plat Tech)


67 Terms

1
New cards

multiprogramming

Maximum CPU utilization is obtained with ______.

2
New cards

CPU I/O Burst Cycle

Process execution consists of a cycle of CPU execution and I/O wait

3
New cards

CPU burst

followed by I/O burst

4
New cards

CPU burst distribution

is of main concern

5
New cards

CPU scheduler

selects from among the processes in ready queue, and allocates a CPU core to one of them

6
New cards

nonpreemptive

When scheduling takes place only under circumstances 1 and 4, the scheduling scheme is

1. Switches from running to waiting state

4. Terminates

7
New cards

preemptive

When scheduling also takes place under circumstances 2 and 3, the scheduling scheme is

2. Switches from running to ready state

3. Switches from waiting to ready

8
New cards

Windows, macOS, Linux, and UNIX

use preemptive scheduling algorithms.

9
New cards

race conditions

Preemptive scheduling can result in ______ when data are shared among several processes.

10
New cards

Dispatcher module

gives control of the CPU to the process selected by the CPU scheduler

11
New cards

Dispatch latency

time it takes for the dispatcher to stop one process and start another running

12
New cards

CPU utilization

keep the CPU as busy as possible

13
New cards

Throughput

number of processes that complete their execution per time unit

14
New cards

Turnaround time

amount of time to execute a particular process

15
New cards

Waiting time

amount of time a process has been waiting in the ready queue

16
New cards

Response time

amount of time it takes from when a request was submitted until the first response is produced.

17
New cards

Max CPU utilization

Max throughput

Min turnaround time

Min waiting time

Min response time

Scheduling Algorithm Optimization Criteria
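The criteria above can be computed from a finished schedule. A minimal sketch, using hypothetical process names and times (not from the deck):

```python
# Compute turnaround, waiting, and response time from a completed
# schedule. Each entry: arrival, CPU burst, first start, completion.
procs = {
    "P1": {"arrival": 0, "burst": 24, "start": 0,  "completion": 24},
    "P2": {"arrival": 0, "burst": 3,  "start": 24, "completion": 27},
}

def metrics(p):
    turnaround = p["completion"] - p["arrival"]  # total time in the system
    waiting = turnaround - p["burst"]            # time spent in the ready queue
    response = p["start"] - p["arrival"]         # time until first CPU service
    return turnaround, waiting, response

for name, p in procs.items():
    print(name, metrics(p))  # P1: (24, 0, 0), P2: (27, 24, 24)
```

Optimizing means maximizing the first criterion group (utilization, throughput) while minimizing these three per-process times.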

18
New cards

First-Come, First-Served (FCFS) Scheduling

automatically executes the queued processes and requests in the order of their arrival

19
New cards

Convoy effect

short process behind long process


is a phenomenon associated with the First Come First Serve (FCFS) algorithm, in which the whole Operating System slows down due to a few slow processes.
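The convoy effect is easy to see numerically. A small sketch with illustrative burst lengths, assuming all jobs arrive at time 0:

```python
def fcfs_waiting_times(bursts):
    """Average waiting time under FCFS for jobs that all arrive at t=0."""
    t, waits = 0, []
    for b in bursts:
        waits.append(t)  # each job waits for every earlier arrival to finish
        t += b
    return sum(waits) / len(waits)

# One long job ahead of short jobs inflates the average waiting time.
print(fcfs_waiting_times([24, 3, 3]))  # long job first  -> 17.0
print(fcfs_waiting_times([3, 3, 24]))  # short jobs first -> 3.0
```

Reordering the same workload drops the average wait from 17 to 3 time units, which is exactly the effect SJF exploits.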

20
New cards

Shortest-Job-First (SJF) Scheduling

Associate with each process the length of its next CPU burst, and use these lengths to schedule the process with the shortest time.

It is a scheduling policy that selects the waiting process with the smallest execution time to execute next. SJF, also known as Shortest Job Next (SJN), can be preemptive or non-preemptive.
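A non-preemptive SJF sketch with hypothetical bursts, assuming all processes arrive at t=0:

```python
def sjf_order(bursts):
    """Non-preemptive SJF: run jobs in order of shortest next CPU burst.
    bursts maps process name -> burst length; all arrive at t=0."""
    return sorted(bursts, key=lambda name: bursts[name])

def avg_waiting(bursts, order):
    """Average waiting time for a given run order."""
    t, total = 0, 0
    for name in order:
        total += t       # this job waited until now
        t += bursts[name]
    return total / len(bursts)

bursts = {"P1": 6, "P2": 8, "P3": 7, "P4": 3}
order = sjf_order(bursts)
print(order, avg_waiting(bursts, order))  # ['P4', 'P1', 'P3', 'P2'] 7.0
```

SJF is provably optimal for average waiting time; the practical difficulty is predicting the length of the next CPU burst.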

21
New cards

Round Robin

Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
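The preempt-and-requeue cycle can be sketched as a simulation; burst lengths and the quantum below are illustrative:

```python
from collections import deque

def round_robin(bursts, q):
    """Simulate Round Robin with time quantum q; all jobs arrive at t=0.
    Returns the completion time of each job."""
    ready = deque(bursts.items())
    t, done = 0, {}
    while ready:
        name, left = ready.popleft()
        run = min(q, left)        # run for one quantum, or less if finishing
        t += run
        if left - run > 0:
            ready.append((name, left - run))  # preempted: back of the queue
        else:
            done[name] = t
    return done

print(round_robin({"P1": 24, "P2": 3, "P3": 3}, q=4))
```

With q=4 the short jobs P2 and P3 finish at times 7 and 10 instead of waiting behind P1, which finishes at 30.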

22
New cards

time quantum

small unit of CPU time

23
New cards

FIFO

If the time quantum q is large, RR behaves the same as ______.

24
New cards

RR

If q is small, context-switch overhead dominates; under ______, q must still be large with respect to context-switch time.

25
New cards

Response

Typically RR has a higher average turnaround than SJF, but better ____.

26
New cards

Priority Scheduling

A priority number (integer) is associated with each process

27
New cards

highest priority

The CPU is allocated to the process with the _____

(smallest integer = highest priority)

28
New cards

inverse

SJF is priority scheduling where priority is the _____ of predicted next CPU burst time

29
New cards

Starvation

Problem = ______ – low priority processes may never execute.

30
New cards

Aging

Solution = ____ – as time progresses increase the priority of the process.
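Aging can be sketched in a few lines; the priority values and boost amount here are illustrative (smaller number = higher priority, as in the cards above):

```python
def age(ready, boost=1):
    """Each tick, raise the priority of every waiting process
    (decrease its priority number) so no process starves forever."""
    for p in ready:
        p["priority"] = max(0, p["priority"] - boost)

ready = [{"name": "low", "priority": 10}, {"name": "high", "priority": 1}]
for _ in range(9):   # after enough ticks the low-priority job
    age(ready)       # catches up and will eventually be selected
print(ready[0]["priority"])  # 1
```

However long a process waits, its priority number keeps falling, so it is eventually chosen.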

31
New cards

same priority

Run the process with the highest priority. Processes with the ______ run round-robin.

32
New cards

Multilevel Queue

The ready queue consists of multiple queues.

33
New cards

highest-priority

A multilevel queue scheduler runs the process in the ______ queue first!

34
New cards

Multilevel Queue Prioritization based upon process type:

real-time processes

system processes

interactive processes

batch processes

35
New cards

Multilevel Feedback Queue

A process can move between the various queues.

36
New cards

Aging can be implemented using multilevel feedback queue.
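A minimal multilevel feedback queue sketch: a job that uses its whole quantum is demoted, and aging promotes a long-waiting job back to the top. The queue count, quanta, and job names are all hypothetical:

```python
from collections import deque

queues = [deque(), deque(), deque()]  # index 0 = highest-priority queue
quantum = [8, 16, None]               # None = FCFS at the lowest level

def demote(level, job):
    """Job used its full quantum: move it one level down."""
    queues[min(level + 1, len(queues) - 1)].append(job)

def promote(job):
    """Aging: a job that has waited too long moves back to the top queue."""
    queues[0].append(job)

demote(0, "batch-job")
promote("starved-job")
print([list(q) for q in queues])  # [['starved-job'], ['batch-job'], []]
```

Moving between queues is what distinguishes the feedback variant from a fixed multilevel queue.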

37
New cards

Thread Scheduling

Distinction between user-level and kernel-level threads.

38
New cards

process-contention scope (PCS)

In the many-to-one and many-to-many models, the thread library schedules user-level threads to run on an LWP. Because the competition for the CPU is among threads within the same process, this is known as ______.

39
New cards

system-contention scope (SCS)

Kernel thread scheduled onto available CPU is _____ – competition among all threads in system.

40
New cards

Pthread Scheduling

API allows specifying either PCS or SCS during thread creation

41
New cards

Multiple-Processor Scheduling

CPU scheduling more complex when multiple CPUs are available

42
New cards

Symmetric multiprocessing (SMP)

is where each processor is self-scheduling.

43
New cards

Multicore Processors

Recent trend to place multiple processor cores on the same physical chip; this is faster and consumes less power.

44
New cards

Multithreaded Multicore System

Each core has more than one hardware thread.

If one thread has a memory stall, switch to another thread!

45
New cards

Chip-multithreading (CMT)

_____ assigns each core multiple hardware threads. (Intel refers to this as hyperthreading.)

46
New cards

8

On a quad-core system with 2 hardware threads per core, the operating system sees ____ logical processors.

47
New cards
1. The operating system deciding which software thread to run on a logical CPU.

2. Each core deciding which hardware thread to run on the physical core.

Two levels of scheduling:

48
New cards

Load balancing

attempts to keep workload evenly distributed

49
New cards

Push migration

a periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs

50
New cards

Pull migration

an idle processor pulls a waiting task from a busy processor
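Pull migration can be sketched with per-CPU run queues; the CPU ids and task names below are illustrative:

```python
from collections import deque

# Per-CPU ready queues: CPU 0 is overloaded, CPU 1 is idle.
run_queues = {0: deque(["A", "B", "C", "D"]), 1: deque()}

def pull_migration(idle_cpu):
    """An idle CPU steals a waiting task from the busiest CPU
    (push migration would be the same move, initiated periodically
    by a balancer task instead of by the idle CPU)."""
    busiest = max(run_queues, key=lambda c: len(run_queues[c]))
    if busiest != idle_cpu and run_queues[busiest]:
        run_queues[idle_cpu].append(run_queues[busiest].pop())

pull_migration(1)
print({cpu: list(q) for cpu, q in run_queues.items()})
```

After the pull, the workload is one step closer to balanced: CPU 1 now has a task and CPU 0 has one fewer.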

51
New cards

Processor Affinity

When a thread has been running on one processor, that processor's cache stores the thread's recent memory accesses.

We refer to this as a thread having affinity for a processor.

52
New cards

Soft affinity

the operating system attempts to keep a thread running on the same processor, but no guarantees.

53
New cards

Hard affinity

allows a process to specify a set of processors it may run on.

54
New cards

NUMA-aware

If the operating system is ______, it will assign memory close to the CPU the thread is running on.

55
New cards

Real-Time CPU Scheduling

Can present obvious challenges, since tasks carry timing constraints.

56
New cards

Soft real-time systems

Critical real-time tasks have the highest priority, but no guarantee as to when tasks will be scheduled

57
New cards

Hard real-time systems

task must be serviced by its deadline.

58
New cards

Event latency

the amount of time that elapses from when an event occurs to when it is serviced.

59
New cards

Interrupt latency

time from arrival of interrupt to start of routine that services interrupt

60
New cards

Dispatch latency

time for the scheduler to take the current process off the CPU and switch to another

61
New cards

Priority-based Scheduling

For real-time scheduling, the scheduler must support preemptive, priority-based scheduling

62
New cards

Rate Monotonic Scheduling

A priority is assigned to each task based on the inverse of its period: the shorter the period, the higher the priority.
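The inverse-of-period rule in a few lines, with hypothetical task periods:

```python
def rate_monotonic_priorities(periods):
    """Assign static priorities by inverse period:
    the shorter the period, the higher the priority (1 = highest)."""
    order = sorted(periods, key=lambda task: periods[task])
    return {task: rank + 1 for rank, task in enumerate(order)}

# Hypothetical periodic tasks with periods in ms.
print(rate_monotonic_priorities({"T1": 50, "T2": 100, "T3": 20}))
# {'T3': 1, 'T1': 2, 'T2': 3}
```

The priorities are static: they are fixed when the task set is admitted, unlike EDF's deadline-driven choice.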

63
New cards

Earliest Deadline First Scheduling (EDF)

Priorities are assigned according to deadlines:

The earlier the deadline, the higher the priority.

The later the deadline, the lower the priority.
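The EDF selection rule is a single comparison over the ready set; the job names and deadlines below are illustrative:

```python
def edf_pick(ready):
    """Earliest Deadline First: among the ready jobs, run the one
    whose absolute deadline is nearest."""
    return min(ready, key=lambda job: job["deadline"])

ready = [{"name": "A", "deadline": 100},
         {"name": "B", "deadline": 40},
         {"name": "C", "deadline": 75}]
print(edf_pick(ready)["name"])  # B
```

Because deadlines change as jobs are released, EDF priorities are dynamic, unlike rate-monotonic scheduling's static assignment.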

64
New cards

Proportional Share Scheduling

T shares are allocated among all processes in the system
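With T total shares, a process holding N shares receives N/T of the CPU. A tiny sketch with illustrative share counts:

```python
def cpu_share(shares, total):
    """Proportional share: a process with N of T shares gets N/T of the CPU."""
    return {name: n / total for name, n in shares.items()}

# T = 100 shares; A, B, C hold 50, 15, and 20 of them.
print(cpu_share({"A": 50, "B": 15, "C": 20}, total=100))
```

Here A is guaranteed 50% of the CPU; an admission controller would reject any new process asking for more than the 15 unallocated shares.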

65
New cards

POSIX Real-Time Scheduling

The POSIX.1b standard

API provides functions for managing real-time threads

66
New cards

Linux Scheduling Through Version 2.5

Prior to kernel version 2.5, Linux ran a variation of the standard UNIX scheduling algorithm.

67
New cards

Completely Fair Scheduler (CFS)

The default Linux scheduler since kernel version 2.6.23; rather than fixed time slices, it allots each task a proportion of CPU time and always runs the task with the smallest virtual run time.