CSE 3430 Pt3


1
New cards

Operating System

Software (a program!) that converts hardware into a useful form for applications: it makes sure the system operates (1) correctly, (2) efficiently, and (3) in an easy-to-use manner.

2
New cards

Virtualization

Make the system behave so that each application can run as though it has certain critical resources all to itself; i.e., make the system work so that each application can run as though it has exclusive use of each critical resource in the system, such as the CPU, main memory, and disk (make it look like there's no sharing, even though there always is).

For example, make each application run as though it has exclusive use of the CPU, exclusive use of all of main memory, and exclusive use of disk (even though no application has exclusive use of these resources).

3
New cards

Concurrency

Many events are occurring simultaneously and may interact with one another.

4
New cards

Persistence

Preserve information permanently (even when power goes off).

5
New cards

Process

An execution stream in the context of a process execution state. It represents a running (active) program that the OS can pause and resume.

6
New cards

Execution Stream

• Stream (sequence) of executing instructions

• Running piece of code

7
New cards

Process Execution State

everything that the running code can affect or be affected by, including all registers, address space (code, heap, stack), and open files.

8
New cards

Process vs. Program

A program is static code and data stored on disk, while a process is a dynamic instance of that code and data executing in memory.

9
New cards

Process API (Interfaces/Actions)

• Create

• Destroy

• Wait (wait for another process to finish running; see the sketch below)
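
A minimal sketch of these three actions using the POSIX calls fork, exit, and waitpid (error handling omitted for brevity):

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* Create: clone the calling process */
    if (pid == 0) {
        printf("child running\n"); /* only the child executes this branch */
        exit(0);                   /* Destroy: the child terminates itself */
    }
    waitpid(pid, NULL, 0);         /* Wait: parent blocks until the child finishes */
    printf("child finished\n");
    return 0;
}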

10
New cards

Process Creation

The OS loads the program’s code and static data into memory, allocates stack and heap space, sets up I/O (stdin, stdout, stderr), and creates a process control block (PCB).

11
New cards

What must be done for a process to start execution

Once everything is in memory, the OS must schedule the process and start it at its entry point (the first instruction in main()).

12
New cards

Three Process States

Ready – everything needed to run is in memory, but not executing on the CPU

Running – process is executing instructions on the CPU

Blocked – process is waiting for an event (e.g., I/O, lock)

13
New cards

CPU Time-Sharing

only one process uses the CPU at a time, but the OS rapidly switches between processes to give each the illusion of having its own CPU

14
New cards

Direct Execution

process runs directly on hardware with no restrictions

15
New cards

Limited Direct Execution

process runs directly on hardware, but OS and hardware retain control via privilege levels, system calls, and interrupts

16
New cards

Problems with Direct Execution

The process could:

Access memory or disk it should not

Run forever without giving up the CPU

Perform slow operations (e.g., I/O) and waste CPU time

17
New cards

How Limited Direct Execution Is Provided

Status bit / privilege levels (user mode vs. kernel mode)

System calls (traps)

Interrupts and exceptions

18
New cards

Interrupt

raised by hardware devices (e.g., timer, disk)

19
New cards

Exception

caused by the currently executing instruction

20
New cards

Trap (Type of Exception)

instruction does not re-execute after OS handles it (used for system calls, errors)

21
New cards

Fault (Type of Exception)

instruction re-executes after OS fixes the problem

22
New cards

Why the CPU Changes Mode from User to Kernel

Only the CPU can safely change the mode bit to kernel mode; otherwise user processes could change it themselves.

The CPU does this on interrupts and exceptions.

23
New cards

Returning from Kernel Mode to User Mode

The OS changes the mode bit back to user mode and simultaneously loads the next user instruction into the program counter

24
New cards

What User Processes cannot do

User processes cannot:

Perform general memory access

Perform disk I/O

Execute privileged instructions (e.g., changing the PC)

25
New cards

What Happens If a User Process Violates Restrictions

The CPU traps to the OS, and the process is terminated (e.g., segmentation fault).

26
New cards

How the CPU Is Taken Away from a Process

The OS regains control using a timer interrupt, which forces a switch back to kernel mode so the dispatcher can run.

27
New cards

Context Switch

saves the execution state of one process and restores another.

Saved/restored data includes all registers (PC, stack pointer, status register), stored in the PCB.

28
New cards

Problems with Cooperative (Voluntary) Multitasking

Processes may not voluntarily give up the CPU, leading to CPU hogging or system lockup.

No modern OS uses this approach

29
New cards

True Multitasking

guarantees the OS regains control using periodic timer interrupts, allowing fair CPU sharing among processes.

30
New cards

Ready to Running

process is scheduled on the CPU

31
New cards

Running to Ready

process is descheduled (e.g., time slice expires)

32
New cards

Running to Blocked

process initiates I/O or waits for an event

33
New cards

Blocked to Ready

I/O or event completes

34
New cards

Scheduler Optimization Parameters

Minimize:

Turnaround time – time from arrival to completion

Response time – time until process first runs

Waiting time – time spent in ready queue

Overhead – context-switch cost

Maximize:

Throughput – jobs completed per unit time

Resource utilization – keep CPU and disk busy

Fairness – equal CPU share over time

35
New cards

Turnaround Time

= completion_time − arrival_time

36
New cards

Response Time

= first_run_time − arrival_time

37
New cards

Waiting Time

= completion_time − arrival_time − run_time
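
For example, a job that arrives at time 0, first runs at time 2, runs for a total of 5 time units, and completes at time 10 has turnaround time 10 − 0 = 10, response time 2 − 0 = 2, and waiting time 10 − 0 − 5 = 5.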

38
New cards

FIFO/FCFS

run jobs in arrival order (non-preemptive)

39
New cards

SJF

run job with smallest run time (non-preemptive)

40
New cards

STCF

run job with least remaining time (preemptive)

41
New cards

RR

rotate jobs using fixed time slices (preemptive)

42
New cards

Convoy Effect

Short jobs wait behind long jobs
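
For example, suppose jobs A (run time 100), B (10), and C (10) arrive together at time 0. FIFO runs them in arrival order and finishes them at times 100, 110, and 120, for an average turnaround time of 110. SJF runs B and C first, finishing at 10, 20, and 120, for an average of 50.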

43
New cards

Convoy Effect (Schedulers subject to the convoy effect)

FIFO / FCFS

SJF

44
New cards

Why RR Gives Better Response Time

RR schedules processes quickly using short time slices, allowing interactive jobs to run early instead of waiting behind long jobs.

45
New cards

Time Quantum (Time Slice)

the fixed amount of CPU time a process may run before being preempted in RR scheduling.

46
New cards

When RR Is Worse Than FIFO

RR has worse average turnaround time than FIFO when all jobs have equal length: every job is stretched out to finish near the end of the schedule, and the frequent context switches add further overhead.
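
For example, with two jobs of length 100 arriving together, FIFO finishes them at times 100 and 200 (average turnaround 150), while RR with a time slice of 1 finishes them at about 199 and 200 (average about 199.5), even before counting context-switch cost.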

47
New cards

I/O and CPU Scheduling

When a process performs I/O, it becomes blocked, and the scheduler runs another ready process so the CPU is not idle.

48
New cards

MLFQ (Multi-Level Feedback Queue)

uses multiple priority queues with round-robin scheduling at each level. Higher-priority queues preempt lower ones.

49
New cards

Use of History in MLFQ

MLFQ uses past behavior (CPU burst history) to predict future behavior and adjust process priority.

50
New cards

Goals of the MLFQ Scheduler

Good response time for interactive jobs

Good turnaround time for batch jobs

General-purpose scheduling without knowing run time

51
New cards

Problems with MLFQ and Solutions

Problems:

Inflexibility – processes that change behavior may be stuck at low priority

Starvation – low-priority jobs may never run

Solutions:

Priority boosting – periodically raise all jobs to top priority

52
New cards

Shared Data Storage

Shared data must be stored on the heap. It cannot be stored on the stack, because stacks are private to each thread.

53
New cards

Thread

a sequence of execution that is separate from other execution sequences within the same process.

54
New cards

Each Thread Has

A thread ID (tid)

A register set, including PC, IR, stack pointer, and data registers (e.g., rax)

55
New cards

How Multithreaded Processes Improve Performance

Multithreaded processes can run multiple threads simultaneously on different CPU cores, allowing the process to complete work faster.
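
A minimal pthreads sketch of one process running two threads that the OS may schedule on different cores (the worker function and its argument are illustrative):

#include <pthread.h>
#include <stdio.h>

/* each thread executes this function on its own private stack */
void *worker(void *arg) {
    printf("thread %ld running\n", (long)arg);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L); /* may run on another core */
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL); /* wait for both threads to finish */
    pthread_join(t2, NULL);
    return 0;
}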

56
New cards

Critical Section (Critical Section Code)

Code that accesses shared data

57
New cards

Mutual Exclusion

the requirement that at most one process or thread may execute critical section code at any given time.

58
New cards

Indeterminate Output

occurs when running the same program twice with the same input produces different output.

59
New cards

Why Mutual Exclusion Is Important

Guaranteeing mutual exclusion prevents multiple threads from accessing shared data simultaneously and prevents indeterminate output.

60
New cards

Binary Semaphore

stores one of two values: 0 or 1.

Its operations are:

semWaitB (P): blocks if value is 0; if 1, sets it to 0 and proceeds

semSignalB (V): wakes a blocked process if one exists; otherwise sets value to 1

61
New cards

semWaitB (P)

blocks if value is 0; if 1, sets it to 0 and proceeds

62
New cards

semSignalB (V)

wakes a blocked process if one exists; otherwise sets value to 1

63
New cards

Using a Binary Semaphore as a Lock

A binary semaphore initialized to 1 can act as a lock:

semWaitB is used to acquire the lock before entering the critical section

semSignalB is used to release the lock after leaving the critical section

This guarantees only one thread executes the critical section at a time
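
A sketch using POSIX semaphores, whose sem_wait and sem_post play the roles of semWaitB (P) and semSignalB (V) above (the shared counter is illustrative):

#include <semaphore.h>

sem_t lock;             /* binary semaphore used as a lock */
int shared_counter = 0; /* shared data touched by the critical section */

void init(void) {
    sem_init(&lock, 0, 1); /* initialize to 1 = unlocked */
}

void increment(void) {
    sem_wait(&lock);   /* P: acquire the lock before the critical section */
    shared_counter++;  /* critical section: at most one thread runs this */
    sem_post(&lock);   /* V: release the lock after the critical section */
}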

64
New cards

File

an array of persistent bytes that can be read and written. _____s are necessary because main memory, registers, and caches lose data when power is turned off.

65
New cards

Inode Number

Advantage: Simple and efficient for the OS

Disadvantage: Arbitrary and not meaningful to humans

66
New cards

Path (String) Name

Advantage: Human-readable and organized using directories

Disadvantage: Expensive due to directory traversal on every access

67
New cards

File Descriptor

Advantage: Efficient access using cached inode data in memory

Disadvantage: Only valid while the file is open
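
A minimal POSIX sketch (the path name is illustrative): the expensive path lookup happens once at open, and the descriptor is meaningless after close:

#include <fcntl.h>
#include <unistd.h>

void read_example(void) {
    char buf[128];
    int fd = open("notes.txt", O_RDONLY); /* path lookup and inode caching happen here */
    if (fd < 0)
        return;
    read(fd, buf, sizeof buf); /* later accesses use the cached in-memory inode */
    close(fd);                 /* the descriptor is no longer valid after this */
}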

68
New cards

Block

the unit of allocation for files on disk.

69
New cards

Methods Used to Allocate Blocks for Files

Contiguous allocation

Small number of extents

Linked allocation

File-Allocation Table (FAT)

70
New cards

Contiguous Allocation

File stored in consecutive blocks

Metadata: starting block + file size

Fast sequential and random access

Suffers from external fragmentation, hard to grow

71
New cards

Small Number of Extents

File stored in several contiguous regions

Metadata: small array of (starting block, size) pairs

Reduces fragmentation, allows limited growth

72
New cards

Linked Allocation

File stored as a linked list of blocks

Metadata: first block; each block stores pointer to next

No external fragmentation, easy growth

Very poor random access, pointer overhead

73
New cards

File-Allocation Table

Linked allocation using a centralized table

Metadata: first block + FAT table

Faster random access than linked allocation

FAT can be cached in memory; table size scales with disk size
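
A sketch of random access with a FAT, assuming the table is already cached in memory (sizes and names are illustrative):

#define FAT_EOF (-1)

int fat[1024]; /* fat[b] = block that follows b in its file, or FAT_EOF */

/* return the disk block holding the n-th block of a file
   whose first block is 'first' */
int nth_block(int first, int n) {
    int b = first;
    while (n-- > 0)
        b = fat[b]; /* chain lookups happen in memory, not on disk */
    return b;
}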

74
New cards

Motivations for Virtualization of Memory

Support multiprogramming (many processes at once)

Prevent processes from corrupting the OS or each other

Give each process the illusion of a private address space

75
New cards

Four Goals of Multiprogramming

Transparency – processes are unaware memory is shared

Protection & Privacy – processes cannot access others’ memory

Efficiency – minimize wasted memory (fragmentation)

Sharing – allow controlled sharing between cooperating processes

76
New cards

What Is in a Process Address Space

Code

Heap

Stack

77
New cards

Static vs. Dynamic Parts of Address Space

Dynamic: Heap, Stack

Static: Code

78
New cards

Time sharing (of memory)

Only one process is in main memory at a time; other processes are swapped to disk to avoid address conflicts. Does not work well because swapping on every context switch causes extremely poor performance

79
New cards

Dynamic relocation (base + bounds)

The MMU adds a base register to each logical address and checks it against a bounds register to ensure protection and correct relocation.
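
A sketch of that check-then-relocate step, with the trap handler left hypothetical:

extern void raise_protection_fault(void); /* hypothetical trap into the OS */

/* what the MMU does on every memory access */
unsigned translate(unsigned vaddr, unsigned base, unsigned bounds) {
    if (vaddr >= bounds)
        raise_protection_fault(); /* out-of-range access traps to the OS */
    return base + vaddr;          /* relocate: physical = base + virtual */
}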

80
New cards

Segmentation

Divides a process’s address space into logical segments (code, heap, stack), each with its own base and bounds.

81
New cards

Paging

Divides memory into fixed-size pages and maps virtual pages to physical page frames using a page table

82
New cards

Advantages / Disadvantages of Base + Bounds

Advantages

Simple and fast

Provides protection

Supports relocation

Disadvantages

Requires contiguous allocation

Wastes unused space (heap–stack gap)

No partial sharing

83
New cards

Virtual to Physical Translation with Segmentation

Segment bits select base/bounds pair

Offset is checked against bounds

Offset is added to base to get physical address

If offset ≥ bounds → error
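
A sketch of the steps above, assuming a 14-bit virtual address whose top 2 bits select the segment (sizes, names, and the fault handler are illustrative):

extern void raise_protection_fault(void); /* hypothetical trap into the OS */

struct seg { unsigned base, bounds; } seg_table[4];

unsigned translate_seg(unsigned vaddr) {
    unsigned segnum = (vaddr >> 12) & 0x3; /* top 2 bits pick the base/bounds pair */
    unsigned offset = vaddr & 0xFFF;       /* remaining 12 bits */
    if (offset >= seg_table[segnum].bounds)
        raise_protection_fault();          /* offset >= bounds -> error */
    return seg_table[segnum].base + offset;
}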

84
New cards

Advantages / Disadvantages of Segmentation

Advantages

Independent growth of heap and stack

Fine-grained protection

Supports sharing

Disadvantages

Each segment must be contiguous

External fragmentation

85
New cards

Page Size

Always 2ⁿ bytes (power of two)

86
New cards

Virtual Page vs. Physical Page Size

They must be the same size

87
New cards

Fundamental Idea of Paging

Break address space and physical memory into fixed-size pages

Eliminate need for contiguous allocation

88
New cards

How Paging Works

Virtual address → VPN + offset

VPN mapped to physical frame via page table

Offset copied directly

89
New cards

Virtual Address Bits with Paging

Most significant bits: Virtual Page Number (VPN)

Least significant bits: Page Offset

90
New cards

Fragmentation Eliminated by Paging

External fragmentation

91
New cards

Fragmentation Still Possible with Paging

Internal fragmentation

92
New cards

VPN to PPN Translation

Performed using the page table

93
New cards

Page Table

Data structure that maps virtual page numbers to physical page frames

94
New cards

Virtual to Physical Address Translation

Use VPN to index page table

Get physical frame number

Append offset to form physical address
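
A sketch of the steps above, assuming 4 KB (2^12-byte) pages so the low 12 bits are the offset (the table and its size are illustrative):

#define OFFSET_BITS 12 /* 4 KB pages: offset is the low 12 bits */

unsigned long page_table[1024]; /* indexed by VPN; each entry holds a frame number */

unsigned long translate_page(unsigned long vaddr) {
    unsigned long vpn    = vaddr >> OFFSET_BITS;              /* most significant bits */
    unsigned long offset = vaddr & ((1UL << OFFSET_BITS) - 1);
    unsigned long pfn    = page_table[vpn];                   /* VPN -> physical frame */
    return (pfn << OFFSET_BITS) | offset;                     /* append the offset */
}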

95
New cards

Page Tables Storage

In main memory (OS memory)

96
New cards

Data Stored in Page Table Entries (PTEs)

Physical frame number

Valid bit

Present bit

Protection bits (r/w/x)

Reference bit

Dirty bit

97
New cards

Advantages of Paging

No external fragmentation

Easy allocation and freeing

Supports virtual memory

Allows partial process residency in RAM

98
New cards

Disadvantages of Paging

Internal fragmentation

Large memory overhead for page tables

99
New cards

How Paging Allows More Virtual Memory Than RAM

Only small portions of each process’s address space are in RAM

Typically 2–3% of a process is resident at a time

100
New cards

Paging In

Copying a page from disk to RAM so a process can access it