Threads and Scheduling

Flashcards covering key concepts related to thread states, scheduling, and performance measures.

Last updated 10:20 AM on 4/24/26

29 Terms

1. Response Time: Time between a process's arrival in the ready queue and its first assignment to the CPU
2. Turnaround Time: Time between a process's arrival in the ready queue and its completion
3. Throughput: Number of processes that start and complete during the observation interval
4. Wait Time: Total time a process spends waiting for the CPU (includes response time)
5. Response Time Formula: R = t1
6. Wait Time Formula: W = R + t3
7. CPU Burst Time Formula: B = t2 + t4
8. Turnaround Time Formula: T = B + W
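Assuming the t1–t4 segments label alternating intervals on a single process's timeline (t1 and t3 waiting, t2 and t4 executing — an interpretation inferred from the formulas, not stated in the cards), the four formulas can be checked with a small sketch:

```python
# Hypothetical timeline segments for one process (time units):
# t1 = initial wait, t2 = first CPU burst, t3 = second wait, t4 = second burst.
t1, t2, t3, t4 = 3, 5, 2, 4

R = t1            # response time: arrival until first CPU assignment
W = R + t3        # wait time: all time spent waiting for the CPU
B = t2 + t4       # CPU burst time: total time actually executing
T = B + W         # turnaround time: arrival until completion

print(R, W, B, T)  # 3 5 9 14
```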
9. FCFS Throughput: Number of processes divided by total time units (e.g. 4 processes / 15 units = 0.267 proc/unit)
10. FCFS Response Time: Time from process arrival to first CPU assignment (waiting in the queue; FCFS is non-preemptive)
11. FCFS Wait Time: Response time plus any additional waiting during execution
12. FCFS Turnaround Time: CPU burst time plus wait time
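A minimal FCFS simulation, using hypothetical (arrival, burst) pairs chosen so the total burst time matches the 4-processes-in-15-units throughput example above:

```python
# Sketch of FCFS metrics with hypothetical (arrival, burst) pairs.
procs = [(0, 5), (1, 3), (2, 4), (3, 3)]  # 4 processes, 15 total burst units

clock = 0
metrics = []
for arrival, burst in sorted(procs):      # FCFS: serve in arrival order
    start = max(clock, arrival)           # CPU may sit idle until arrival
    response = start - arrival            # non-preemptive: first run is the only run
    wait = response                       # FCFS: no further waiting once started
    turnaround = wait + burst
    clock = start + burst
    metrics.append((response, wait, turnaround))

throughput = len(procs) / clock           # processes completed per time unit
print(metrics)                 # [(0, 0, 5), (4, 4, 7), (6, 6, 10), (9, 9, 12)]
print(round(throughput, 3))    # 0.267, matching the example above
```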
13. Running (Thread State): Thread is currently executing on the CPU
14. Ready (Thread State): Thread is prepared to run and awaits CPU allocation
15. Blocked (Thread State): Thread cannot proceed until an external event (e.g. I/O) completes
16. spawn: Create a new thread
17. block: Move a thread to the blocked state (e.g. on an I/O request)
18. unblock: Return a thread to the ready state after an event completes
19. finish: Thread terminates after execution
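The operations above, together with the three thread states, form a small state machine. A sketch as a transition table (the "dispatch" operation for Ready → Running is an assumption; only spawn, block, unblock, and finish appear in the cards):

```python
# Illustrative thread-lifecycle state machine built from the cards above.
TRANSITIONS = {
    ("Ready",   "dispatch"): "Running",   # assumed name: scheduler assigns the CPU
    ("Running", "block"):    "Blocked",   # e.g. an I/O request
    ("Blocked", "unblock"):  "Ready",     # the awaited event completed
    ("Running", "finish"):   "Finished",  # thread terminates
}

def spawn():
    return "Ready"            # spawn: a new thread starts in the Ready state

def step(state, op):
    return TRANSITIONS[(state, op)]

s = spawn()                   # Ready
s = step(s, "dispatch")       # Running
s = step(s, "block")          # Blocked
s = step(s, "unblock")        # Ready
print(s)                      # Ready
```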
20. User-Level Thread (ULT): Threads handled entirely by the application's thread library; the kernel does not know about individual threads
21. Kernel-Level Thread (KLT): Threads managed directly by the OS kernel, which maintains context for each thread
22. Suspending a process: All of its threads become blocked
23. Terminating a process: All of its threads are finished
24. Process Running (thread states): One thread is Running
25. Process Ready (thread states): All threads are Ready, awaiting the CPU
26. Process Blocked (thread states): All threads are Blocked, e.g. waiting for I/O
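The three process-level rules (cards 24–26) can be written as one function, a sketch assuming a process's state is derived purely from its threads' states:

```python
# Sketch: derive a process's state from its threads' states (cards 24-26).
def process_state(thread_states):
    if any(t == "Running" for t in thread_states):
        return "Running"      # one Running thread is enough
    if all(t == "Blocked" for t in thread_states):
        return "Blocked"      # e.g. every thread waiting for I/O
    return "Ready"            # otherwise some thread is Ready, awaiting the CPU

print(process_state(["Running", "Ready", "Blocked"]))  # Running
print(process_state(["Blocked", "Blocked"]))           # Blocked
print(process_state(["Ready", "Ready"]))               # Ready
```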
27. Single-Threaded Server (RPC): The client process blocks while waiting for each RPC response, so requests to different servers complete sequentially
28. One-Thread-Per-Server (RPC): Each server's RPC is issued from its own client thread; a thread blocks only on its own response while the others continue
29. Multithreaded Application (RPC): Two client threads issue independent RPCs to different servers simultaneously; overlapping latency reduces total wait time
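The latency-overlap idea in cards 28–29 can be demonstrated with a small sketch; `rpc`, the server names, and the 0.2 s delay are all illustrative, with `time.sleep` standing in for network latency:

```python
import threading, time

# Sketch: two client threads issue independent (simulated) RPCs concurrently.
def rpc(server, latency, results):
    time.sleep(latency)                   # thread blocks awaiting the response
    results[server] = f"reply from {server}"

results = {}
t0 = time.time()
threads = [
    threading.Thread(target=rpc, args=("server-A", 0.2, results)),
    threading.Thread(target=rpc, args=("server-B", 0.2, results)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

elapsed = time.time() - t0
print(sorted(results))        # ['server-A', 'server-B']
print(elapsed < 0.35)         # True: ~0.2 s total, not 0.4 s, since waits overlap
```

Run sequentially, the same two calls would take about 0.4 s; overlapping them halves the total wait, which is the point of card 29.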