Module 2: Process and Process Management
Process Concept
A process = program in execution (also called job, task, unit of work)
Program = passive entity on disk; Process = active entity in memory
Execution must progress sequentially
Attributes of a process
Program (text / code segment)
Data (global & static variables)
Heap (dynamic allocation via malloc, new, etc.)
Stack (function parameters, return addresses, local vars)
State (new, ready, running, waiting, terminated)
Resources (open files, I/O devices, memory pages, etc.)
Logical view: each process feels it has its own virtual CPU
Reality: real CPU switches quickly between processes (context-switch)
One program may spawn multiple processes (e.g., many users run same editor)
Execution environments
Batch systems → jobs
Time-shared systems → user programs or tasks
CPU & I/O Burst Cycle
Programs normally alternate between CPU bursts (compute) and I/O bursts (waiting)
Sequence: CPU → I/O wait → CPU → … until completion
Understanding this alternation is crucial for designing scheduling algorithms that mix CPU-bound and I/O-bound workloads efficiently
Memory Layout of a Process
When loaded, memory is divided into 4 main sections (top ↓ bottom)
Stack (grows downward)
Heap (grows upward)
Data/BSS (static & global data)
Text (code)
Current activity is captured by the Program Counter (PC) and CPU registers
Process Life-Cycle & States
Common 5-state model
New / Start – process is being created
Ready – waiting in ready queue for CPU
Running – instructions executing on CPU
Waiting / Blocked – waiting for an event or resource (I/O, child, etc.)
Terminated / Exit – finished execution, awaiting cleanup
3-state view (Ready, Running, Waiting) with transitions:
Ready → Running: dispatcher selects process
Running → Ready: time-slice expired or pre-empted by higher priority
Running → Waiting: requests a resource or I/O, must wait
Waiting → Ready: event completes
5-state detailed model adds New (admission) & Exit (completion)
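The legal transitions above can be sketched as a small state machine; a minimal Python sketch (the state names and table are taken from the model above, the function name is illustrative):

```python
# Legal transitions of the 5-state process model.
VALID = {
    ("new", "ready"),            # admitted by the long-term scheduler
    ("ready", "running"),        # dispatched
    ("running", "ready"),        # time slice expired / pre-empted
    ("running", "waiting"),      # requests resource or I/O
    ("waiting", "ready"),        # awaited event completes
    ("running", "terminated"),   # exits
}

def transition(state, new_state):
    """Return new_state if the move is legal, else raise ValueError."""
    if (state, new_state) not in VALID:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Walk one typical life-cycle: admit, dispatch, block on I/O, resume, finish.
state = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    state = transition(state, nxt)
```

Note that there is deliberately no Waiting → Running edge: a blocked process must re-enter the ready queue and be dispatched again.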
Process Control Block (PCB)
Core kernel data structure per process (a.k.a. Task Control Block)
Typical fields
Identification: PID, PPID, UID, GID
Processor state: PC, CPU registers, PSW/EFLAGS, stack pointer
Scheduling info: priority, queue pointers, time slice
Memory-management info: page/segment tables, limits
I/O & file info: list of open files, devices, root & working dir
Accounting: CPU time used, start time, alarms, resource usage
Privileges & access control bits
Part of PCB must reside in main memory for OS to manage the process image
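The field groups listed above can be sketched as a record type; a hypothetical Python dataclass (field names are illustrative, not any real kernel's layout):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Identification
    pid: int
    ppid: int
    # Processor state (saved/restored on context switch)
    program_counter: int = 0
    registers: dict = field(default_factory=dict)
    # Scheduling info
    state: str = "new"           # new / ready / running / waiting / terminated
    priority: int = 0
    # I/O & file info
    open_files: list = field(default_factory=list)
    # Accounting
    cpu_time_used: float = 0.0

pcb = PCB(pid=42, ppid=1)
pcb.state = "ready"              # moved to the ready queue after creation
```

A real kernel keeps such a structure (e.g. Linux's task_struct) in protected kernel memory, one per process.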
Process Creation
Reasons
System boot / initialization
Batch job submission
User login session
Service processes (e.g., print daemon)
Explicit user request (fork, CreateProcess, etc.)
Child processes spawned by a parent form a process tree
Steps
Assign unique PID
Allocate memory for process image
Initialize PCB with defaults (state = New)
Set up linkages in scheduling & parent/child lists
Load program (duplicate parent or exec new image)
UNIX model
fork() creates an almost identical child (copy-on-write)
exec() overlays the child's memory with a new program
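The fork/exec pattern is normally written in C; a minimal POSIX-only sketch using Python's os module as a stand-in (the exit status 7 is an arbitrary illustration):

```python
import os

pid = os.fork()                  # POSIX only: child sees 0, parent sees child's PID
if pid == 0:
    # Child: an exec*() call would overlay this image with a new program,
    # e.g. os.execvp("ls", ["ls"]); here we just exit with a known status.
    os._exit(7)

_, status = os.waitpid(pid, 0)   # parent reaps the child (no zombie left behind)
exit_code = os.WEXITSTATUS(status)
```

The copy-on-write trick makes fork cheap: parent and child share physical pages until one of them writes, so an immediate exec() wastes almost nothing.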
Process Termination
Causes
Normal completion (exit, Halt)
Error / fatal fault: arithmetic exception, segmentation violation
Exceeded resource/time limits
I/O failure, invalid or privileged instruction
Parent’s request, or cascading termination if parent exits
OS intervention (deadlock recovery, protection error)
After termination PCB kept temporarily for accounting; later reclaimed
Scheduling Fundamentals
Goal: maximize CPU utilization while meeting performance objectives
Queues
Job queue – all processes in system
Ready queue – resident in memory, ready to run
Device/I-O queues – waiting for specific devices
Processes migrate among these queues during life-cycle
Scheduling Criteria & Optimization
CPU Utilization: keep the CPU as busy as possible (40–90 % in practice)
Throughput: processes completed per unit time
Turnaround Time: total time from submission to completion (waiting + CPU execution + I/O)
Waiting Time: total time spent in ready queue
Response Time: first response latency (important for interactive)
Optimization aims: maximize utilization & throughput, minimize turnaround, waiting, response
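The three time metrics follow directly from a process's timeline; a small worked example with invented numbers (arrival 0, first dispatched at 2, 6 ms total CPU burst, done at 10):

```python
# Illustrative timeline for one process (all times in ms).
arrival, first_run, completion, burst = 0, 2, 10, 6

turnaround = completion - arrival   # total time in the system
waiting = turnaround - burst        # time spent sitting in the ready queue
response = first_run - arrival      # first-response latency (interactive feel)

print(turnaround, waiting, response)   # 10 4 2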
Levels of Scheduling
Long-Term (Job) Scheduler
Decides which jobs enter the system; controls degree of multiprogramming
Medium-Term Scheduler (also called Swapper)
Swaps processes to/from disk to control mix & memory usage
Short-Term (CPU) Scheduler / Dispatcher
Chooses among ready processes for next CPU burst; does context switch
Pre-emptive vs Non-Pre-emptive Scheduling
Non-Pre-emptive: running process keeps CPU until waits or terminates
Pre-emptive: OS may force context switch on arrival of higher-priority or on time-slice expiry
Events causing decisions
Running → Waiting (non-pre-emptive)
Running → Ready (pre-emptive)
Waiting → Ready (pre-emptive)
Running → Terminated (non-pre-emptive)
Single-Processor Scheduling Algorithms
First-Come, First-Served (FCFS)
Non-pre-emptive, simple FIFO order of arrival
Convoy effect: one long CPU-bound job can delay many short I/O-bound ones → poor utilization
Example (#1): P1=24, P2=3, P3=3, arrival order P1, P2, P3
Avg Waiting = (0 + 24 + 27)/3 = 17; Avg Turnaround = (24 + 27 + 30)/3 = 27
Example (#2): order P2, P3, P1
Avg Waiting = 3; Avg Turnaround = 13 (better)
Detailed 6-process example produced Avg CT ≈ 16.17, Avg WT = 11, Avg TAT ≈ 14.83
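The FCFS arithmetic above is just a running prefix sum of burst times; a minimal sketch assuming all processes arrive at t = 0:

```python
def fcfs(bursts):
    """Avg (waiting, turnaround) for FCFS; bursts listed in arrival order,
    all arrivals at t=0."""
    wait = turnaround = elapsed = 0
    for b in bursts:
        wait += elapsed            # time queued before this burst starts
        elapsed += b
        turnaround += elapsed      # completion time (arrival = 0)
    n = len(bursts)
    return wait / n, turnaround / n

# Example #1: long job first -> convoy effect
print(fcfs([24, 3, 3]))            # (17.0, 27.0)
# Example #2: short jobs first
print(fcfs([3, 3, 24]))            # (3.0, 13.0)
```

The two calls reproduce the averages above and make the convoy effect concrete: the same three bursts give 17 ms vs 3 ms average waiting depending purely on arrival order.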
Shortest Job First (SJF)
Chooses process with smallest next CPU burst
Non-pre-emptive version; if tie → FCFS
Provides minimum average waiting time theoretically
Need to predict next CPU burst: τₙ₊₁ = α·tₙ + (1 − α)·τₙ, where tₙ is the length of the n-th burst and 0 ≤ α ≤ 1 (exponential averaging)
Pre-emptive form = Shortest Remaining Time First (SRTF/SRTN)
Non-pre-emptive Example (all arrive at t=0; bursts in parentheses): P3(1) → P2(4) → P4(5) → P1(6)
Avg Waiting = 4; Avg Turnaround = 8
Varied arrivals Example gives Waiting = 4, Turnaround = 8 (non-pre-emptive) and Waiting = 3, Turnaround = 7 (pre-emptive)
Pros: lowest averages; Cons: possible starvation of long jobs & need for accurate prediction
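For arrivals all at t = 0, non-pre-emptive SJF is just FCFS over the sorted burst list, and the burst predictor is one line; a sketch (α = 0.5 is an illustrative choice):

```python
def sjf(bursts):
    """Avg (waiting, turnaround) for non-pre-emptive SJF, all arrivals at t=0."""
    wait = tat = elapsed = 0
    for b in sorted(bursts):       # shortest (predicted) burst first
        wait += elapsed
        elapsed += b
        tat += elapsed
    n = len(bursts)
    return wait / n, tat / n

# Bursts from the example: P1=6, P2=4, P3=1, P4=5
print(sjf([6, 4, 1, 5]))           # (4.0, 8.0)

def predict_next(tau, t, alpha=0.5):
    """Exponential averaging: tau_next = alpha*t + (1 - alpha)*tau."""
    return alpha * t + (1 - alpha) * tau
```

With α near 1 the predictor chases the most recent burst; with α near 0 it clings to history, which is why α is usually set somewhere in between.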
Priority Scheduling
Each process assigned an integer priority (smaller ⇒ higher)
Can be pre-emptive or non-pre-emptive
SJF is a special case where priority = predicted burst time
Starvation of low-priority processes mitigated by aging (gradually increase priority over time)
Two sources of priority
Internal: execution time, memory needs, number of open files
External: user importance, financial value, etc.
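Aging can be sketched as a periodic pass over the ready queue; a hypothetical Python sketch (the step size, floor, and dict layout are illustrative) using the smaller-number-is-higher convention from above:

```python
def age(queue, step=1, floor=0):
    """One aging pass: every waiting process's priority number drops
    (i.e. its effective priority rises), never below `floor`."""
    for proc in queue:
        proc["priority"] = max(floor, proc["priority"] - step)

ready = [{"pid": 1, "priority": 7}, {"pid": 2, "priority": 3}]
age(ready)                                       # priorities become 6 and 2
nxt = min(ready, key=lambda p: p["priority"])    # dispatcher picks best priority
print(nxt["pid"])                                # 2
```

Repeated passes guarantee even a priority-7 process eventually reaches the floor and gets dispatched, which is exactly the starvation fix aging is for.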
Round Robin (RR)
Pre-emptive time-sharing; FCFS order, but each process gets a time quantum q (1–100 ms typical)
After q expires, the process is pre-empted & placed at the tail of the ready queue
If the CPU burst < q, the process releases the CPU early (improves throughput)
Worst-case wait ≤ (n − 1)·q for n processes
Example (q = 20 ms) with 4 processes P1(53), P2(17), P3(68), P4(24)
Gantt: P1(0–20) → P2(20–37) → P3(37–57) → P4(57–77) → P1(77–97) → P3(97–117) → P4(117–121) → P1(121–134) → P3(134–154) → P3(154–162)
Avg Waiting = 73, Avg Turnaround = 113.5
Advantages: fairness, good response, no starvation; Downsides: many context switches, depends on quantum size
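The RR mechanics above (run up to q, re-queue at the tail) fit in a few lines; a sketch assuming all arrivals at t = 0:

```python
from collections import deque

def round_robin(bursts, q):
    """Avg (waiting, turnaround) for RR; bursts is {name: burst_time},
    all arrivals at t=0, q is the time quantum."""
    remaining = dict(bursts)
    ready = deque(bursts)          # FCFS order of arrival
    t = 0
    completion = {}
    while ready:
        p = ready.popleft()
        run = min(q, remaining[p])
        t += run
        remaining[p] -= run
        if remaining[p] == 0:
            completion[p] = t      # finished early or exactly on the quantum
        else:
            ready.append(p)        # pre-empted: back to the tail
    n = len(bursts)
    tat = sum(completion.values()) / n              # arrival = 0
    wait = sum(completion[p] - bursts[p] for p in bursts) / n
    return wait, tat

print(round_robin({"P1": 53, "P2": 17, "P3": 68, "P4": 24}, q=20))
# (73.0, 113.5) -- matches the q = 20 ms example
```

Re-running with a larger q collapses toward FCFS (fewer switches, worse response); a tiny q maximizes responsiveness but drowns in context-switch overhead.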
Multilevel Queue (MLQ)
Ready queue partitioned into multiple static queues by process type (e.g., system, interactive, batch)
Each queue has its own algorithm (e.g., foreground RR, background FCFS)
Inter-queue scheduling via fixed priority or time-slicing
Risk of starvation of lower queues unless time slice across queues is used
Example: 5 queues with absolute priority → lower queues run only when higher are empty
Multilevel Feedback Queue (MLFQ)
Like MLQ but processes can move between queues
Parameters to define
Number of queues & their individual algorithms
Quantum size per level (usually increasing downward)
Rules for demotion (e.g., uses full quantum → lower queue)
Rules for promotion (e.g., waits too long → higher queue) implementing aging
Example setup
Q1: RR with a small quantum
Q2: RR with a larger quantum
Q3: FCFS
Scheduler executes all of Q1, then Q2, then Q3 unless pre-empted by arrival into higher queue
Alternative example
Q0 (RR 8 ms) → if not finished → tail of Q1 (RR 16 ms) → if not finished → Q2 (FCFS)
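The Q0(8 ms) → Q1(16 ms) → Q2(FCFS) hierarchy with demotion on a full quantum can be sketched directly (job names and burst values are invented for illustration):

```python
from collections import deque

def run_mlfq(jobs, quanta=(8, 16, None)):
    """MLFQ sketch: jobs is {name: burst}; quanta per level, None = FCFS
    (run to completion) at the bottom. Returns completion order."""
    remaining = dict(jobs)
    queues = [deque() for _ in quanta]
    for name in jobs:
        queues[0].append(name)     # everything enters at the top level
    finished = []
    while any(queues):
        level = next(i for i, qd in enumerate(queues) if qd)  # highest non-empty
        p = queues[level].popleft()
        q = quanta[level]
        run = remaining[p] if q is None else min(q, remaining[p])
        remaining[p] -= run
        if remaining[p] == 0:
            finished.append(p)
        else:
            # Used the full quantum without finishing: demote one level.
            queues[min(level + 1, len(quanta) - 1)].append(p)
    return finished

print(run_mlfq({"A": 5, "B": 30, "C": 12}))   # ['A', 'C', 'B']
```

Short job A finishes inside its first 8 ms quantum and never leaves Q0, while CPU-hog B sinks to the FCFS level, which is the intended behavior: interactive work stays responsive, long computation still completes. A full implementation would also promote long-waiting processes (aging) and pre-empt lower levels on arrivals into higher queues.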
Dispatcher
Component of short-term scheduler that:
Saves and restores context
Switches to user mode
Jumps to correct location in user program
Dispatch latency: time it takes to stop one process & start another
Threads
Thread = lightweight unit of CPU utilization; shares process resources (code, data, files) but has its own PC, registers, stack
Advantages over processes
Responsiveness (UI remains active if one thread blocks)
Resource sharing is natural (no heavy IPC)
Economy (cheaper creation & context switch)
Scalability on multiprocessors
Single vs Multithreaded process illustration – multiple stacks inside same address space
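The shared-versus-private split can be demonstrated in a few lines; a Python sketch where a module-level list stands in for the shared data segment and threading.local for per-thread state (analogous to each thread's own stack):

```python
import threading

shared = []                        # one copy in the process: all threads see it
local = threading.local()          # per-thread storage, like a private stack slot

def worker(name):
    local.name = name              # each thread sees only its own value here
    shared.append(local.name)      # but they all append to the same list

threads = [threading.Thread(target=worker, args=(f"T{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(shared))              # ['T0', 'T1', 'T2']
```

list.append is safe here, but any read-modify-write on shared state would need a lock, which is exactly the synchronization concern raised at the end of this module.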
User vs Kernel Threads
User threads: managed by user-level library (Pthreads, Windows threads API, Java threads)
Kernel threads: managed directly by OS (Windows, Linux, Solaris, macOS)
Multithreading Models
Many-to-One
Many user threads mapped to one kernel thread
Simple; can’t exploit multi-core; blocking affects all
Ex: Solaris Green Threads
One-to-One
Each user thread → separate kernel thread
High concurrency; more overhead; limits threads per process
Ex: Windows, Linux, Solaris 9+
Many-to-Many
Many user threads multiplexed over smaller/equal number of kernel threads
OS decides which kernel threads to create
Ex: Older Solaris, Windows (ThreadFiber package)
Thread Libraries & OS Support
Implementation options
Entirely in user space (fast, but blocking issues)
Kernel-level library (system calls)
Windows thread structures
ETHREAD (kernel) → points to KTHREAD (sched/sync & kernel stack) → TEB (user data, TLS)
Linux uses the clone() system call to create tasks (threads) with selective sharing based on flags; each task is represented by a task_struct
Ethical / Practical Implications & Real-World Relevance
Fair scheduling prevents starvation and supports user satisfaction in multi-tenant systems
Priority & aging decisions have policy consequences (e.g., real-time vs best-effort)
Multithreading must avoid data races; requires synchronization (mutex, semaphores)
Proper quantum selection in RR balances context-switch overhead vs responsiveness
PCB & thread structures hold sensitive info → need protection/isolation in kernel space to avoid privilege escalation
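The data-race point above can be made concrete with a mutex; a minimal Python sketch using threading.Lock (the counts are arbitrary illustration):

```python
import threading

count = 0
lock = threading.Lock()

def increment(n):
    global count
    for _ in range(n):
        with lock:                 # without this, count += 1 is a read-modify-write
            count += 1             # that can interleave across threads and lose updates

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)                       # 40000 with the lock; often less without it
```

Dropping the `with lock:` line turns this into the classic lost-update race: two threads read the same value, both add one, and one increment vanishes.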