LQ2 - Operating Systems - CPU Scheduling


56 Terms

1

CPU–I/O Burst Cycle

consists of a cycle of CPU execution and I/O wait

2

False

Short CPU bursts are far more frequent than long bursts: a burst-duration histogram shows a large number of short bursts and only a small number of long ones.

True or False: there are a small number of short bursts, while there are a large number of long bursts

3

CPU scheduler

selects from among the processes in the ready queue, and allocates a CPU core to one of them

4

nonpreemptive

If scheduling takes place only when a process (1) switches from the running state to the waiting state, or (2) terminates, which scheduling scheme is in use?

5

Dispatcher

this module gives control of the CPU to the process selected by the CPU scheduler. This involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program.

6

Dispatch latency

time it takes for the dispatcher to stop one process and start another running

7

Throughput

Number of processes that complete their execution per time unit

8

Turnaround time

Amount of time to execute a particular process

9

Waiting time

Amount of time a process has been waiting in the ready queue

10

Response time

Amount of time it takes from when a request was submitted until the first response is produced.
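The four timing metrics on these cards can be made concrete with a tiny worked example. This is an illustrative sketch only: the process names and burst lengths are invented, all processes are assumed to arrive at time 0, and they run to completion in order (so response time equals waiting time here).

```python
# Illustrative Gantt chart: P1 | P2 | P3 run back to back from time 0.
bursts = {"P1": 6, "P2": 3, "P3": 9}  # CPU burst lengths (invented)

clock = 0
waiting = {}     # time spent in the ready queue before running
turnaround = {}  # submission-to-completion time (arrival is 0 here)

for name, burst in bursts.items():
    waiting[name] = clock     # everything before us had to finish first
    clock += burst
    turnaround[name] = clock  # this process completes now

avg_waiting = sum(waiting.values()) / len(waiting)           # (0 + 6 + 9) / 3 = 5
avg_turnaround = sum(turnaround.values()) / len(turnaround)  # (6 + 9 + 18) / 3 = 11
throughput = len(bursts) / clock  # 3 processes complete in 18 time units
```

Note how turnaround time is simply waiting time plus burst time when there is no I/O, which matches the card definitions above.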

11

Shortest-Job First (SJF) Scheduling

Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.

12

Shortest-Job First (SJF) Scheduling

is optimal—gives minimum average waiting time for a given set of processes
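A minimal sketch of the two SJF cards, assuming all processes arrive at time 0 and the next-burst predictions are already known (the example bursts are invented):

```python
# SJF with simultaneous arrivals: run the shortest predicted burst first.
def sjf_avg_waiting(bursts):
    """bursts: dict of process name -> predicted next CPU burst length."""
    clock, total_wait = 0, 0
    for name in sorted(bursts, key=bursts.get):  # shortest burst first
        total_wait += clock     # this process waited `clock` time units
        clock += bursts[name]
    return total_wait / len(bursts)
```

With bursts 6, 8, 7, and 3, SJF runs them as 3, 6, 7, 8 and yields an average wait of 7; any other order gives a higher average, which is the optimality claim on the card.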

13

shortest-remaining-time-first

Preemptive version of SJF

14

Round Robin (RR)

Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
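The preempt-and-requeue behavior above can be sketched as a small simulation. Assumptions: all processes arrive at time 0, there is no I/O, and the bursts and quantum are invented for illustration.

```python
from collections import deque

def round_robin_completion(bursts, quantum):
    """Return {name: completion time} under round robin with the given quantum."""
    remaining = dict(bursts)
    ready = deque(bursts)            # FIFO ready queue
    clock, done = 0, {}
    while ready:
        name = ready.popleft()
        run = min(quantum, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] == 0:
            done[name] = clock       # process finished
        else:
            ready.append(name)       # quantum expired: back of the queue
    return done
```

With bursts of 24, 3, and 3 and a quantum of 4, the two short processes finish at times 7 and 10 instead of waiting for the long one, which is the interactivity benefit RR trades throughput for.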

15

Priority Scheduling

  • A priority number (integer) is associated with each process

  • The CPU is allocated to the process with the highest priority (by convention, the smallest integer indicates the highest priority)

16

Starvation

a problem with priority scheduling where low priority processes may never execute

17

Aging

as time progresses, the priority of long-waiting processes is increased, so low-priority processes eventually run (a solution to starvation)
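The starvation and aging cards can be sketched together. The priority numbers and the aging step below are invented, with the convention that a smaller number means higher priority:

```python
# Pick the highest-priority ready process, then age everything that was
# passed over so low-priority work cannot starve forever.
def pick_and_age(priorities, boost=1):
    """priorities: dict of name -> priority number (smaller = higher).
    Returns the chosen process; every other process's number is reduced
    by `boost`, i.e. its priority rises as it waits."""
    chosen = min(priorities, key=priorities.get)
    for name in priorities:
        if name != chosen and priorities[name] > 0:
            priorities[name] -= boost   # aging: waiting raises priority
    return chosen
```

Each call models one scheduling decision: the low-priority process creeps toward the front instead of waiting indefinitely.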

18

Priority Scheduling with Round-Robin

Run the process with the highest priority. Processes with the same priority run round-robin.

19

Multilevel Queue

With priority scheduling, maintain a separate queue for each priority level.

20

Multilevel Feedback Queue

A process can move between the various queues.
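A toy sketch of that queue migration, assuming three levels with growing quanta and demotion whenever a process uses its whole quantum (the levels, quanta, and bursts are all invented):

```python
from collections import deque

def mlfq_trace(bursts, quanta=(2, 4, 8)):
    """Return the order in which (process, queue level) pairs get the CPU."""
    queues = [deque() for _ in quanta]
    remaining = dict(bursts)
    for name in bursts:
        queues[0].append(name)       # every process starts in the top queue
    trace = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest nonempty
        name = queues[level].popleft()
        trace.append((name, level))
        run = min(quanta[level], remaining[name])
        remaining[name] -= run
        if remaining[name] > 0:      # used its full quantum: demote one level
            queues[min(level + 1, len(quanta) - 1)].append(name)
    return trace
```

A short job finishes in the top queue while a CPU-hungry job drifts downward, which is how the feedback queue separates interactive from batch work.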

21

Symmetric multiprocessing (SMP)

is where each processor is self-scheduling; threads may sit in a common ready queue or in per-processor ready queues

22

Multiple-Processor Scheduling

CPU scheduling with multiple CPUs available

23

Multicore Processors

  • Recent trend to place multiple processor cores on the same physical chip

  • Faster and consumes less power

  • Multiple threads per core are also growing in use

    • Takes advantage of a memory stall to make progress on another thread while the memory retrieval happens

24

Multithreaded Multicore System

  • Each core has more than one hardware thread

  • If one thread has a memory stall, switch to another thread

25

Chip-multithreading (CMT)

assigns each core multiple hardware threads. (Intel refers to this as hyperthreading.)

26

load balancing

Since SMP needs to keep all CPUs loaded for efficiency, ____ ____ attempts to keep the workload evenly distributed across them.

27

Push migration

a periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs

28

Pull migration

an idle processor pulls a waiting task from a busy processor

29

Soft affinity

the operating system attempts to keep a thread running on the same processor, but makes no guarantees.

30

Hard affinity

allows a process to specify a set of processors it may run on.

31

Multiple-Processor Scheduling - Processor Affinity

When a thread has been running on one processor, that processor's cache stores the memory recently accessed by the thread, so keeping the thread on the same processor lets it benefit from the warm cache.

32

NUMA-aware

If the operating system is _______, it will assign memory closest to the CPU the thread is running on.

33

Soft real-time systems

A real-time system where critical real-time tasks have the highest priority, but there is no guarantee as to when tasks will be scheduled.

34

Hard real-time systems

A real-time system where a task must be serviced by its deadline.

35

runqueue

In the Linux version 2.5 scheduler, all runnable tasks are tracked in a per-CPU ____ data structure

36

Two scheduling classes in Linux scheduling version 2.6.23+

  • default

  • real-time

37

virtual run time, vruntime

The CFS scheduler maintains a per-task ____ ____ ____ in the variable _____

38

The Linux Completely Fair Scheduler (CFS)

provides an efficient algorithm for selecting which task to run next. Each runnable task is placed in a red-black tree

39

red-black tree

a balanced binary search tree whose key is based on the value of vruntime.
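A hedged sketch of the pick-next step described by these two cards: CFS always runs the leftmost (minimum-vruntime) task in its red-black tree. A binary min-heap is used below as a stand-in for the red-black tree, since both expose the minimum cheaply; the task names, weights, and time step are invented.

```python
import heapq

# Stand-in for CFS's red-black tree: a min-heap keyed on vruntime.
# Running a task for `delta` advances its vruntime scaled by 1/weight,
# so heavier (higher-priority) tasks accumulate vruntime more slowly
# and therefore get picked more often.
def cfs_run(tasks, steps, delta=10):
    """tasks: list of (vruntime, name, weight) tuples. Returns run order."""
    heapq.heapify(tasks)
    order = []
    for _ in range(steps):
        vruntime, name, weight = heapq.heappop(tasks)  # leftmost task
        order.append(name)
        heapq.heappush(tasks, (vruntime + delta / weight, name, weight))
    return order
```

With equal starting vruntimes, the weight-2 task B receives two of the first three slots, which is the "proportion of CPU time" behavior the CFS card below describes.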

40

Scheduling domain

is a set of CPU cores that can be balanced against one another.

41

Idle thread

In Windows scheduling, if there is no runnable thread, the dispatcher runs the ____ ____

42

User-mode scheduling (UMS)

Windows 7 added ____ ____:

  • Applications create and manage threads independently of the kernel

  • Much more efficient for large numbers of threads

  • Typically accessed through programming-language libraries such as the C++ Concurrency Runtime (ConcRT) framework

43

preemptive, nonpreemptive

Scheduling algorithms may be either ___ (where the CPU can be taken away from a process) or ___ (where a process must voluntarily relinquish control of the CPU). Almost all modern operating systems are preemptive.

44

CPU scheduling

is the task of selecting a waiting process from the ready queue and allocating the CPU to it. The CPU is allocated to the selected process by the dispatcher.

45

First-come, first-served (FCFS) scheduling

is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.
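The short-behind-long problem (the convoy effect) shows up with invented numbers: one long CPU burst and two short ones, all arriving at time 0.

```python
# Compare average waiting time for two run orders of the same workload.
def avg_wait(order, bursts):
    """Average time spent in the ready queue when run in the given order."""
    clock, total = 0, 0
    for name in order:
        total += clock
        clock += bursts[name]
    return total / len(order)

bursts = {"long": 24, "short1": 3, "short2": 3}
convoy = avg_wait(["long", "short1", "short2"], bursts)  # shorts stuck behind
better = avg_wait(["short1", "short2", "long"], bursts)  # shorts run first
```

Merely reordering the same three bursts drops the average wait from 17 to 3, which is why FCFS can be so poor for interactive workloads.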

46

Shortest-job-first (SJF)

scheduling is demonstrably optimal, providing the shortest average waiting time. Implementing this algorithm is difficult because it is hard to predict the length of the next CPU burst.

47

Round-robin (RR) scheduling

allocates the CPU to each process for a time quantum. If the process does not relinquish the CPU before its time quantum expires, the process is preempted, and another process is scheduled to run for a time quantum.

48

Priority scheduling

assigns each process a priority, and the CPU is allocated to the process with the highest priority. Processes with the same priority can be scheduled in FCFS order or using RR scheduling.

49

Multilevel queue scheduling

partitions processes into several separate queues arranged by priority, and the scheduler executes the processes in the highest-priority queue. Different scheduling algorithms may be used in each queue.

50

Multilevel feedback queues

are similar to multilevel queues, except that a process may migrate between different queues.

51

Multicore processors

place one or more CPUs on the same physical chip, and each CPU may have more than one hardware thread. From the perspective of the operating system, each hardware thread appears to be a logical CPU.

52

Load balancing

on multicore systems equalizes loads between CPU cores, although migrating threads between cores to balance loads may invalidate cache contents and, therefore, may increase memory access times.

53

Linux

uses the completely fair scheduler (CFS).

54

Completely Fair Scheduler (CFS)

Assigns a proportion of CPU processing time to each task. The proportion is based on the virtual runtime (vruntime) value associated with each task.

55

Windows scheduling

uses a preemptive, 32-level priority scheme to determine the order of thread scheduling.

56

Solaris

identifies six unique scheduling classes that are mapped to a global priority. CPU-intensive threads are generally assigned lower priorities (and longer time quanta), and I/O-bound threads are usually assigned higher priorities (with shorter time quanta).