LQ2 - Operating Systems - CPU Scheduling

1

CPU–I/O Burst Cycle

consists of a cycle of CPU execution and I/O wait

2

false

in reality there are a large number of short CPU bursts and a small number of long ones

True or False: there are a small number of short bursts, while there are a large number of long bursts

3

CPU scheduler

selects from among the processes in ready queue, and allocates a CPU core to one of them

4

nonpreemptive

If scheduling takes place only under the following circumstances: (1) switches from running to waiting state, and (2) terminates; what scheduling scheme does it use?

5

Dispatcher

this module gives control of the CPU to the process selected by the CPU scheduler. This involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program.

6

Dispatch latency

time it takes for the dispatcher to stop one process and start another running

7

Throughput

Number of processes that complete their execution per time unit

8

Turnaround time

Amount of time to execute a particular process

9

Waiting time

Amount of time a process has been waiting in the ready queue

10

Response time

Amount of time it takes from when a request was submitted until the first response is produced.
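
A small worked example can tie these criteria together. The sketch below (in Python; process names and burst lengths are invented for illustration) runs three processes back-to-back from time 0 and computes turnaround, waiting, and response time for each.

    # Hypothetical example: three processes arrive at time 0 and run
    # back-to-back in the order P1, P2, P3.
    bursts = {"P1": 8, "P2": 4, "P3": 2}

    clock = 0
    for name, burst in bursts.items():
        response = clock                 # first time on the CPU minus arrival (0)
        completion = clock + burst
        turnaround = completion          # completion time minus arrival time (0)
        waiting = turnaround - burst     # time spent in the ready queue
        print(name, "turnaround:", turnaround, "waiting:", waiting, "response:", response)
        clock = completion

    # Throughput: 3 processes complete in 14 time units.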

11

Shortest-Job First (SJF) Scheduling

Associate with each process the length of its next CPU burst. Use these lengths to schedule the process with the shortest time.

12

Shortest-Job First (SJF) Scheduling

is optimal—gives minimum average waiting time for a given set of processes
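
A minimal sketch of why running the shortest burst first minimizes average waiting time, using made-up burst lengths; any other ordering of these jobs gives an average wait of at least 7.

    # Non-preemptive SJF for jobs that all arrive at time 0.
    bursts = [6, 8, 7, 3]                # burst lengths are illustrative

    waiting_times = []
    clock = 0
    for burst in sorted(bursts):         # run the shortest job first
        waiting_times.append(clock)      # time this job waited in the ready queue
        clock += burst

    print(sum(waiting_times) / len(waiting_times))   # 7.0, the minimum average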

13

shortest-remaining-time-first

Preemptive version of SJF

14

Round Robin (RR)

Each process gets a small unit of CPU time (time quantum q), usually 10-100 milliseconds. After this time has elapsed, the process is preempted and added to the end of the ready queue.
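
A minimal simulation of this idea, assuming a quantum of 4 and three made-up processes; a process that does not finish within its quantum is preempted and re-queued at the tail.

    from collections import deque

    quantum = 4
    ready = deque([("P1", 24), ("P2", 3), ("P3", 3)])   # (name, remaining burst)

    clock = 0
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)                   # run for at most one quantum
        clock += run
        if remaining > run:
            ready.append((name, remaining - run))       # preempted, back of the queue
        else:
            print(name, "finishes at", clock)           # P2 at 7, P3 at 10, P1 at 30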

15

Priority Scheduling

  • A priority number (integer) is associated with each process

  • The CPU is allocated to the process with the highest priority (smallest integer = highest priority)

16

Starvation

a problem with priority scheduling where low priority processes may never execute

17

Aging

as time progresses, increase the priority of processes that have been waiting a long time (a solution to starvation)
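
A sketch of one way aging could be implemented (the boost size and number of intervals are assumptions, not a standard): each aging interval lowers the priority number, and therefore raises the priority, of every process still waiting in the ready queue.

    # Smaller number = higher priority, matching the convention above.
    def age(ready_queue, boost=1, highest=0):
        for proc in ready_queue:
            proc["priority"] = max(highest, proc["priority"] - boost)

    ready_queue = [{"name": "P1", "priority": 127}, {"name": "P2", "priority": 5}]
    for _ in range(10):        # ten aging intervals pass while the processes wait
        age(ready_queue)
    print(ready_queue)         # P1 has climbed from priority 127 to 117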

18

Priority Scheduling with Round-Robin

Run the process with the highest priority. Processes with the same priority run round-robin.

19

Multilevel Queue

With priority scheduling, there is a separate queue for each priority.

20

Multilevel Feedback Queue

A process can move between the various queues.
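
A sketch of one common feedback rule (the number of queues and the quanta are assumptions): a process that uses its entire quantum is demoted to a lower-priority queue, while a process that blocks for I/O keeps its level.

    queues = [[], [], []]          # index 0 = highest priority
    quanta = [8, 16, None]         # per-queue time quanta; None = FCFS in the lowest queue

    def on_quantum_expired(proc, level):
        # CPU-bound behaviour: demote one level, but never below the last queue
        queues[min(level + 1, len(queues) - 1)].append(proc)

    def on_io_block_finished(proc, level):
        # interactive behaviour: return to the same level
        queues[level].append(proc)

    on_quantum_expired("P1", 0)
    print(queues)                  # [[], ['P1'], []]: P1 dropped to queue 1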

21

Symmetric multiprocessing (SMP)

is where each processor is self-scheduling; each processor examines the ready queue and selects a thread to run

22

Multiple-Processor Scheduling

CPU scheduling with multiple CPUs available

23

Multicore Processors

  • Recent trend is to place multiple processor cores on the same physical chip

  • Faster and consumes less power than multiple single-core chips

  • Multiple hardware threads per core is also a growing trend

    • Takes advantage of a memory stall to make progress on another thread while the memory retrieval completes

24

Multithreaded Multicore System

  • Each core has more than one hardware thread

  • If one thread has a memory stall, the core switches to another thread

25

Chip-multithreading (CMT)

assigns each core multiple hardware threads. (Intel refers to this as hyperthreading.)

26

load balancing

SMP systems need to keep all CPUs loaded for efficiency; ____ ____ attempts to keep the workload evenly distributed across all processors.

27

Push migration

a periodic task checks the load on each processor and, if an imbalance is found, pushes tasks from the overloaded CPU to other CPUs

28

Pull migration

an idle processor pulls a waiting task from a busy processor

29

Soft affinity

the operating system attempts to keep a thread running on the same processor, but makes no guarantee.

30

Hard affinity

allows a process to specify a set of processors it may run on.

31

Multiple-Processor Scheduling - Processor Affinity

When a thread has been running on one processor, that processor's cache stores the memory recently accessed by the thread, so keeping the thread on the same processor is beneficial.

32

NUMA-aware

If the operating system is _______, it will assign memory closest to the CPU the thread is running on.

33

Soft real-time systems

A real-time system in which critical real-time tasks receive the highest priority, but there is no guarantee as to when they will be scheduled.

34

Hard real-time systems

A real-time system in which a task must be serviced by its deadline.

35

runqueue

In the Linux 2.5 scheduler, all runnable tasks were tracked in a per-CPU ____ data structure

36

Two scheduling classes in Linux scheduling version 2.6.23+

  • default

  • real-time

37

virtual run time, vruntime

The CFS scheduler maintains a per-task ____ ____ ____ in the variable _____

38

The Linux Completely Fair Scheduler (CFS)

provides an efficient algorithm for selecting which task to run next. Each runnable task is placed in a red-black tree.
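
Python has no built-in red-black tree, so the sketch below uses a min-heap keyed on vruntime purely to illustrate the selection rule: the runnable task with the smallest vruntime (the leftmost node of the CFS tree) runs next. Task names and vruntime values are made up.

    import heapq

    runnable = []
    heapq.heappush(runnable, (15.0, "task_a"))   # (vruntime, task)
    heapq.heappush(runnable, (5.0, "task_b"))
    heapq.heappush(runnable, (25.0, "task_c"))

    vruntime, task = heapq.heappop(runnable)     # smallest vruntime, like the leftmost node
    print("run next:", task)                     # task_b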

39

red-black tree

a balanced binary search tree whose key is based on the value of vruntime.

40

Scheduling domain

is a set of CPU cores that can be balanced against one another.

41

Idle thread

In Windows scheduling, if there is no runnable thread, the dispatcher runs the ____ ____

42

User-mode scheduling (UMS)

Windows 7 added ____ ____:

  • Applications create and manage threads independently of the kernel

  • For large numbers of threads, this is much more efficient

  • These schedulers come from programming language libraries such as the C++ Concurrent Runtime (ConcRT) framework

43

preemptive, nonpreemptive

Scheduling algorithms may be either ___ (where the CPU can be taken away from a process) or ___ (where a process must voluntarily relinquish control of the CPU). Almost all modern operating systems are preemptive.

44

CPU scheduling

is the task of selecting a waiting process from the ready queue and allocating the CPU to it. The CPU is allocated to the selected process by the dispatcher.

45

First-come, first-served (FCFS) scheduling

is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes.
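
This is the convoy effect, and a small sketch with made-up burst lengths shows it: the same three jobs give very different average waits depending on whether the long job arrives first or last.

    def avg_wait_fcfs(bursts):
        clock, total_wait = 0, 0
        for burst in bursts:              # FCFS: run strictly in arrival order
            total_wait += clock
            clock += burst
        return total_wait / len(bursts)

    print(avg_wait_fcfs([24, 3, 3]))      # long job first: average wait 17.0
    print(avg_wait_fcfs([3, 3, 24]))      # short jobs first: average wait 3.0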

46

Shortest-job-first (SJF)

scheduling is demonstrably optimal, providing the shortest average waiting time. Implementing this algorithm is difficult because it is hard to predict the length of the next CPU burst.
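
The usual approach is to predict the next burst with exponential averaging of the measured bursts: tau_next = alpha * t + (1 - alpha) * tau. The initial guess of 10 and the measured bursts below are illustrative values.

    alpha = 0.5
    tau = 10.0                            # initial prediction (assumed)
    for t in [6, 4, 6, 4, 13, 13, 13]:    # measured CPU burst lengths
        tau = alpha * t + (1 - alpha) * tau
        print("measured", t, "-> next prediction", tau)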

47

Round-robin (RR) scheduling

allocates the CPU to each process for a time quantum. If the process does not relinquish the CPU before its time quantum expires, the process is preempted, and another process is scheduled to run for a time quantum.

48

Priority scheduling

assigns each process a priority, and the CPU is allocated to the process with the highest priority. Processes with the same priority can be scheduled in FCFS order or using RR scheduling.

49

Multilevel queue scheduling

partitions processes into several separate queues arranged by priority, and the scheduler executes the processes in the highest-priority queue. Different scheduling algorithms may be used in each queue.

50

Multilevel feedback queues

are similar to multilevel queues, except that a process may migrate between different queues.

51

Multicore processors

place one or more CPUs on the same physical chip, and each CPU may have more than one hardware thread. From the perspective of the operating system, each hardware thread appears to be a logical CPU.

52

Load balancing

on multicore systems equalizes loads between CPU cores, although migrating threads between cores to balance loads may invalidate cache contents and, therefore, may increase memory access times.

53

Linux

uses the completely fair scheduler (CFS).

54

Completely Fair Scheduler (CFS)

Assigns a proportion of CPU processing time to each task. The proportion is based on the virtual runtime (vruntime) value associated with each task.
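
A sketch of how that proportion shows up in vruntime: a task's vruntime advances by its real runtime scaled by its weight, so lower-priority (higher-nice) tasks accumulate vruntime faster and are chosen less often. The weight values are illustrative stand-ins for the kernel's nice-to-weight table.

    NICE_0_WEIGHT = 1024

    def vruntime_delta(actual_runtime, weight):
        # vruntime grows faster for tasks with a smaller weight
        return actual_runtime * NICE_0_WEIGHT / weight

    print(vruntime_delta(10, 1024))   # default-priority task: vruntime grows by 10.0
    print(vruntime_delta(10, 335))    # lower-priority task: grows by ~30, runs less often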

55

Windows scheduling

uses a preemptive, 32-level priority scheme to determine the order of thread scheduling.

56

Solaris

identifies six unique scheduling classes that are mapped to a global priority. CPU-intensive threads are generally assigned lower priorities (and longer time quantums), and I/O-bound threads are usually assigned higher priorities (with shorter time quantums).
