C3

23 Terms

1

What 4 transitions in process state can occur?

Process state changes happen like this:

  • Running ➔ Blocked

  • Blocked ➔ Ready

  • Ready ➔ Running

  • Running ➔ Ready

    Processes move between states because of I/O or because the OS (scheduler) makes decisions.
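
A minimal Python sketch of these four transitions (the state names and the transition table below are illustrative, not taken from the course material):

    from enum import Enum

    class State(Enum):
        READY = "ready"
        RUNNING = "running"
        BLOCKED = "blocked"

    # The four legal transitions and the event that triggers each one.
    TRANSITIONS = {
        (State.RUNNING, State.BLOCKED): "process issues an I/O request",
        (State.BLOCKED, State.READY):   "the I/O completes",
        (State.READY,   State.RUNNING): "the scheduler dispatches the process",
        (State.RUNNING, State.READY):   "the scheduler preempts the process",
    }

    def move(current, target):
        """Return the new state if the transition is legal, otherwise raise."""
        if (current, target) not in TRANSITIONS:
            raise ValueError(f"illegal transition: {current} -> {target}")
        return target

    # Example: Ready -> Running -> Blocked -> Ready
    state = State.READY
    for nxt in (State.RUNNING, State.BLOCKED, State.READY):
        state = move(state, nxt)
    print(state)   # State.READY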

2

When does Running ➔ Blocked occur?

When the process needs to do I/O (like reading from a disk).

3

When does Blocked ➔ Ready occur?

When the I/O finishes and the process is ready to run again.

4

When does Ready ➔ Running occur?

When the scheduler picks the process to run.

5

When does Running ➔ Ready occur?

When the scheduler decides to pause the process so another process can run.

6

Which parameters may a scheduler attempt to minimize?

Schedulers try to:

  • Minimize:

    • Turnaround time: how long it takes to finish a process.

    • Response time: how quickly the process first starts running after it arrives.

    • Wait time: how long a process sits waiting in the ready queue.

In short:
Schedulers want processes to finish quickly, start running soon after they arrive, and spend as little time as possible waiting in the ready queue.

7

Which parameters may a scheduler attempt to maximize?

Schedulers try to:

  • Maximize:

    • Throughput: how many processes finish in a certain time.

    • Resource utilization: how busy the CPU and devices are (no wasting time).

    • Fairness: making sure all processes get a fair share of CPU time.

In short:
Schedulers want to complete as much work as possible while keeping the hardware busy and sharing the CPU fairly.

8

What is the formula for turnaround time?

Turnaround time = completion_time – arrival_time

"Done minus Came"

9

What is the formula for response time?

Response time = initial_schedule_time – arrival_time

"First Run minus Came"

10

What is the formula for waiting/wait time?

Wait time = completion_time – arrival_time – run_time

"Turnaround minus Actual Work"

11

How does the FIFO/FCFS scheduler operate? Is it preemptive or non-preemptive?

FIFO stands for First-In, First-Out, also known as FCFS (First-Come, First-Served). FIFO/FCFS is non-preemptive: once a process starts running, it keeps the CPU until it finishes or blocks for I/O.
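
A minimal simulation sketch of FIFO/FCFS, assuming CPU-only jobs and made-up arrival and run times:

    def fifo(jobs):
        """jobs: list of (name, arrival_time, run_time), sorted by arrival_time."""
        t = 0
        turnaround = {}
        for name, arrival, run in jobs:
            t = max(t, arrival)   # CPU may sit idle until the job arrives
            t += run              # the job runs to completion without interruption
            turnaround[name] = t - arrival
        return turnaround

    print(fifo([("A", 0, 10), ("B", 1, 4), ("C", 2, 2)]))
    # -> {'A': 10, 'B': 13, 'C': 14}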

12

How does the SJF scheduler operate? Is it preemptive or non-preemptive?

SJF stands for Shortest Job First. SJF is non-preemptive and selects the process with the smallest run_time to execute.

13

How does the STCF scheduler operate? Is it preemptive or non-preemptive?

STCF stands for Shortest Time-to-Completion First. STCF is preemptive: it always runs the process with the least time remaining to completion, preempting the currently running process when a job that will finish sooner arrives.

14

How does the RR scheduler operate? Is it preemptive or non-preemptive?

RR stands for Round Robin. RR is preemptive, alternating between ready processes using a fixed-length time-slice or time quantum.

15

Which of the schedulers above are subject to the convoy effect?

FIFO/FCFS (First-In, First-Out, also known as First-Come, First-Served) is subject to the convoy effect. It occurs when a long-running job arrives first, forcing the shorter jobs queued behind it to wait a long time, which inflates their turnaround times.
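
A worked sketch of the convoy effect with made-up run times (all three jobs arrive at t=0); running the long job first greatly increases the average turnaround time compared with running the short jobs first:

    def avg_turnaround(run_times):
        """Average turnaround time when all jobs arrive at t=0 and run in the given order."""
        t, total = 0, 0
        for run in run_times:
            t += run
            total += t            # completion time equals turnaround time here
        return total / len(run_times)

    print(avg_turnaround([100, 10, 10]))   # FIFO with the long job first -> 110.0
    print(avg_turnaround([10, 10, 100]))   # shortest jobs first (SJF order) -> 50.0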

16

Why can an RR scheduler provide better response time (assuming an appropriate time quantum)?

RR schedulers can provide better response time because they cycle through all ready processes using a fixed-length time-slice (time quantum). Even when many processes are ready, each one gets its first turn on the CPU relatively soon after it arrives, giving a lower initial_schedule_time and therefore a better response_time (initial_schedule_time – arrival_time). With an appropriate time quantum (e.g., 10–20 ms), the response time can stay below a second even with many competing processes, which is generally considered acceptable for interactive users.
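
A sketch of why the first run comes sooner under RR; the job lengths and the quantum are made up, and context-switch cost is ignored:

    def first_runs_fifo(run_times):
        """Times at which each job first runs under FIFO (all jobs ready at t=0)."""
        t, firsts = 0, []
        for run in run_times:
            firsts.append(t)      # a job first runs only after every earlier job finishes
            t += run
        return firsts

    def first_runs_rr(run_times, quantum):
        """Under RR, job i first runs after i earlier quanta (all jobs ready at t=0)."""
        return [i * quantum for i in range(len(run_times))]

    print(first_runs_fifo([10, 10, 10]))     # [0, 10, 20] -> average response time 10
    print(first_runs_rr([10, 10, 10], 1))    # [0, 1, 2]   -> average response time 1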

17

What is a time quantum/time slice (in RR scheduling)?

In RR (Round Robin) scheduling, a time quantum, also known as a time-slice, is a fixed length of time for which each ready process is allowed to run on the CPU. After a process has run for its allocated time quantum, the scheduler preempts it and gives the CPU to the next process in the ready queue. A common range for the time quantum is 10 to 20 ms.

18

In what way is an RR scheduler usually worse than FIFO/FCFS (with equal length jobs)?

An RR scheduler usually gives a worse average turnaround time than FIFO for equal-length jobs. Even if context switches were free, RR interleaves the jobs so that every job finishes near the end of the batch, while FIFO completes the first job much earlier; the extra context switches RR performs add overhead on top of that and make the comparison even worse.
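
A sketch with made-up numbers: two 10-unit jobs arriving together finish at t=10 and t=20 under FIFO (average turnaround 15), but at t=19 and t=20 under RR with a quantum of 1 (average 19.5), even before any context-switch cost is added:

    def rr_completions(run_times, quantum, switch_cost=0.0):
        """Completion times under RR for jobs that all arrive at t=0."""
        remaining = list(run_times)
        done = [None] * len(run_times)
        t = 0.0
        while any(r > 0 for r in remaining):
            for i, r in enumerate(remaining):
                if r <= 0:
                    continue
                used = min(quantum, r)
                t += used + switch_cost
                remaining[i] -= used
                if remaining[i] == 0:
                    done[i] = t
        return done

    print(rr_completions([10, 10], quantum=1))                   # [19.0, 20.0]
    print(rr_completions([10, 10], quantum=1, switch_cost=0.1))  # roughly [20.9, 22.0]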

19

How do I/O system calls by a process interact with CPU scheduling?

When a process initiates an I/O operation, it makes an I/O system call and typically transitions from the Running state to the Blocked state. This action causes the process to relinquish the CPU, allowing the scheduler to select another process from the Ready queue to execute. Once the I/O operation is completed, the process transitions back to the Ready state, waiting for the scheduler to allocate CPU time to it again.

20

Understand the basic operation of the MLFQ scheduler discussed in class and in the class slides.

The MLFQ (Multi-Level Feedback Queue) scheduler uses multiple queues with different priority levels.

  • Processes start at the top (highest priority).

  • If a process uses up all its time slice, it moves down to a lower-priority queue.

  • Each lower queue gives longer time slices, but lower priority.

This way:

  • Fast, interactive processes stay near the top (for quick response).

  • Long, batch processes sink lower over time.

To keep things fair, the system periodically boosts all processes back to the top queue so that no process gets stuck forever (this prevents starvation).
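
A very rough sketch of these ideas; the number of queues, the time slices, and the boost period below are made-up parameters, not the ones from the class slides:

    from collections import deque

    TIME_SLICE = [2, 4, 8]      # quanta for queue 0 (highest priority) .. queue 2 (lowest)
    BOOST_PERIOD = 50           # how often every process is boosted back to the top

    def mlfq(jobs):
        """jobs: dict of name -> run time needed; all jobs assumed ready at t=0."""
        queues = [deque(jobs), deque(), deque()]
        remaining = dict(jobs)
        t, last_boost, schedule = 0, 0, []
        while any(remaining.values()):
            if t - last_boost >= BOOST_PERIOD:
                # Periodic priority boost: everything back to the top queue (prevents starvation).
                for q in queues[1:]:
                    while q:
                        queues[0].append(q.popleft())
                last_boost = t
            level = next(i for i, q in enumerate(queues) if q)   # highest non-empty queue
            name = queues[level].popleft()
            used = min(TIME_SLICE[level], remaining[name])
            schedule.append((t, name, level))
            t += used
            remaining[name] -= used
            if remaining[name] > 0:
                # Used its whole slice -> looks CPU-bound -> demote one level; otherwise keep level.
                if used == TIME_SLICE[level]:
                    level = min(level + 1, len(queues) - 1)
                queues[level].append(name)
        return schedule

    for start, name, level in mlfq({"interactive": 3, "batch": 60}):
        print(f"t={start:3d}  ran {name} from queue {level}")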

21

How is “history” used by the MLFQ scheduler?

The MLFQ scheduler looks at how a process has behaved in the past to decide its priority.

  • If a process doesn’t use much of its time slice (likely an interactive process), it stays in a high-priority queue.

  • If a process uses all its time slice (likely a long-running task), it gets moved to a lower-priority queue.

The system adjusts priorities based on how processes act to make sure everything runs efficiently.

22

What are the goals of the MLFQ scheduler?

The main goal of the MLFQ scheduler is to balance two types of processes:

  • Interactive programs that need fast response times.

  • Batch programs that need quick completion.

It does this by using multiple queues with different priorities and a round-robin system within each queue, so each process gets the right amount of time to run based on its needs.

23

What are the potential problems with the MLFQ scheduler? (inflexibility, starvation) How can each type of problem be addressed?

The MLFQ scheduler can be too rigid because a process that starts as a long task (batch job) might stay stuck in a low-priority queue even if it becomes interactive later, leading to poor response time.
To fix this, the scheduler can periodically boost the priority of all jobs, giving each of them another chance to run at high priority.

Another problem is starvation, where low-priority jobs might never get to run.
To prevent this, the scheduler can boost the priority of jobs that haven’t run in a while.