CSC246 - Final Quick Notes


1
New cards

Reader-Writer Problem

A classic synchronization problem where multiple threads access shared data:

  • Readers only read the data (no modification).

  • Writers modify the data.

Goal:
Allow multiple readers to access simultaneously,
but ensure exclusive access for writers (no readers or other writers at the same time)
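
One classic sketch that meets this goal uses two POSIX semaphores (a readers-preference variant; names such as rw_mutex and read_count are illustrative, and writers can starve under a steady stream of readers):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t rw_mutex;        /* exclusive access for writers        */
    sem_t mutex;           /* protects read_count                 */
    int   read_count = 0;  /* readers currently reading           */
    int   shared_data = 0;

    void *writer(void *arg) {
        sem_wait(&rw_mutex);       /* exclusive: no readers or writers */
        shared_data++;             /* modify the shared data           */
        sem_post(&rw_mutex);
        return NULL;
    }

    void *reader(void *arg) {
        sem_wait(&mutex);
        if (++read_count == 1)     /* first reader locks writers out   */
            sem_wait(&rw_mutex);
        sem_post(&mutex);

        printf("read %d\n", shared_data);  /* many readers in parallel */

        sem_wait(&mutex);
        if (--read_count == 0)     /* last reader lets writers back in */
            sem_post(&rw_mutex);
        sem_post(&mutex);
        return NULL;
    }

    int main(void) {
        sem_init(&mutex, 0, 1);
        sem_init(&rw_mutex, 0, 1);
        pthread_t r, w;
        pthread_create(&r, NULL, reader, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(r, NULL);
        pthread_join(w, NULL);
        return 0;
    }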

2
New cards

deadlock

occurs when each process in a set is waiting for a resource held by another process in the set, creating a cycle of dependencies that prevents any of them from continuing

3
New cards

Processes stop making progress, Resources are wasted, System may freeze or slow down

why is a deadlock undesirable

4
New cards

Mutual Exclusion, Hold and Wait, No Preemption, Circular Wait

Four Necessary Conditions for Deadlock

5
New cards

Deadlock Possibilities in Multithreaded Code

  • When two threads each try to lock something the other thread has already locked

  • Also happens with nested locks, or when lock ordering is inconsistent across threads.

  • Example: Thread A locks mutex1, then tries to lock mutex2 while Thread B has mutex2 and is trying to lock mutex1 → deadlock (sketched in code below).

  • Common with mutexes or semaphores.
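
A minimal runnable sketch of that mutex1/mutex2 scenario (the sleep() calls just widen the race window so the deadlock reproduces reliably):

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;

    void *thread_a(void *arg) {
        pthread_mutex_lock(&mutex1);
        sleep(1);                      /* let B grab mutex2           */
        pthread_mutex_lock(&mutex2);   /* blocks: B holds mutex2      */
        pthread_mutex_unlock(&mutex2);
        pthread_mutex_unlock(&mutex1);
        return NULL;
    }

    void *thread_b(void *arg) {
        pthread_mutex_lock(&mutex2);
        sleep(1);                      /* let A grab mutex1           */
        pthread_mutex_lock(&mutex1);   /* blocks: A holds mutex1      */
        pthread_mutex_unlock(&mutex1);
        pthread_mutex_unlock(&mutex2);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, thread_a, NULL);
        pthread_create(&b, NULL, thread_b, NULL);
        pthread_join(a, NULL);         /* never returns once deadlocked */
        pthread_join(b, NULL);
        puts("done");                  /* unreachable in the deadlock case */
        return 0;
    }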

6
New cards

Deadlock Detection

Allow deadlocks to happen, but have a way to find them and fix the problem afterward, Flexible, better resource usage

7
New cards

deadlock avoidance

Let processes request resources in ways that could cause deadlock, but control things carefully so deadlock never actually happens, complex and costly to track state

8
New cards

deadlock prevention

Design the system so that deadlocks can’t happen at all by avoiding the conditions that cause them, can hurt performance and fairness

9
New cards

Make resources shareable if possible

deadlock prevention technique to address Mutual Exclusion

10
New cards

require a process to request all of its resources at once

deadlock prevention technique to address Hold and wait

11
New cards

Allow OS to forcefully take resources

deadlock prevention technique to address no preemption

12
New cards

forcing a specific order for requesting resources

deadlock prevention technique to address circular wait
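
A common way to enforce such an order with mutexes is to rank locks (here by address, one illustrative convention) and always acquire the lower-ranked lock first; a minimal sketch:

    #include <pthread.h>
    #include <stdint.h>

    /* Rank locks by address and take the lower-ranked one first.
       If every thread acquires pairs through this helper, the
       waits-for graph can never contain a cycle. */
    void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
        if ((uintptr_t)a > (uintptr_t)b) {
            pthread_mutex_t *tmp = a; a = b; b = tmp;
        }
        pthread_mutex_lock(a);   /* lower-ranked lock first */
        pthread_mutex_lock(b);
    }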

13
New cards

Mutual exclusion

Some resources (like a printer) can only be used by one process at a time

14
New cards

hold and wait

A process is holding some resources and is waiting for more

15
New cards

no preemption

The OS can’t take resources away from a process. It has to wait until the process gives them up.

16
New cards

circular wait

There's a loop of processes, each waiting for a resource that the next one has

17
New cards

yes (often necessary, not avoidable)

Appropriateness of Prevention Technique mutual exclusion

18
New cards

no (inefficient, causes resource hogging or starvation)

Appropriateness of Prevention Technique hold and wait

19
New cards

no (risky, can’t safely take a printer mid-job)

Appropriateness of Prevention Technique no preemption

20
New cards

yes (Practical, widely used via resource ordering)

Appropriateness of Prevention Technique circular wait

21
New cards

are in the deadlock cycle, hold many resources, haven't done much work yet, or appear in multiple cycles

when choosing a process to terminate to eliminate deadlock, choose processes that

22
New cards

safe state

a system can allocate resources to all processes in some order without leading to deadlock, OS can guarantee completion

23
New cards

unsafe state

The system is not in deadlock yet, but there's no guarantee that all processes can complete. Deadlock might occur depending on how processes behave. The OS can’t ensure safety anymore.

24
New cards

Deadlocked State

A set of processes are stuck waiting for resources that will never be released. The system has entered a state where no progress is possible for some or all processes.

25
New cards

Banker’s Algorithm

a deadlock avoidance method that checks whether granting a resource request will keep the system in a safe state before approving it.
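
A sketch of the safety check at the core of the algorithm; a request is granted only if the state after a pretend allocation still passes this check. The matrices follow the usual textbook setup (need = max - alloc), and the sizes N and M are illustrative:

    #include <stdbool.h>

    #define N 5   /* processes      */
    #define M 3   /* resource types */

    bool is_safe(int avail[M], int alloc[N][M], int need[N][M]) {
        int work[M];
        bool finish[N] = { false };
        for (int j = 0; j < M; j++) work[j] = avail[j];

        for (int done = 0; done < N; ) {
            bool progressed = false;
            for (int i = 0; i < N; i++) {
                if (finish[i]) continue;
                bool can_run = true;
                for (int j = 0; j < M; j++)
                    if (need[i][j] > work[j]) { can_run = false; break; }
                if (can_run) {           /* pretend i runs to completion */
                    for (int j = 0; j < M; j++)
                        work[j] += alloc[i][j];   /* ...and releases all */
                    finish[i] = true;
                    progressed = true;
                    done++;
                }
            }
            if (!progressed) return false;  /* unsafe: nobody can finish */
        }
        return true;   /* a safe completion sequence exists */
    }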

26
New cards

Compile time binding

Memory addresses are chosen when the program is compiled

27
New cards

simple and fast to run

Compile time binding advantages

28
New cards

Must load into the exact memory location it was compiled for; inflexible.

Compile time binding disadvantages

29
New cards

Load-Time Binding

Addresses are chosen when the program is loaded into memory

30
New cards

Can run in different memory locations

Load-Time Binding advantages

31
New cards

Once it starts, the program can’t move

Load-Time Binding disadvantages

32
New cards

Execution-Time Binding

memory addresses are chosen as the program runs

33
New cards

Most flexible (programs can move in memory during execution)

Execution-Time Binding advantages

34
New cards

slow and requires extra hardware

Execution-Time Binding disadvantages

35
New cards

static linking

All libraries are added directly to the executable at build time, Program is self-contained—no dependencies at runtime, larger file size

36
New cards

dynamic linking

Library code is linked later, at load or runtime, Executable is smaller and can use up-to-date shared libraries, needs OS support

37
New cards

Shared Libraries

One library file is shared across many programs, saves memory and storage—only one copy in memory and on disk, needs careful management from the OS to avoid conflicts

38
New cards

Dynamic Loading

Only load code (like a subroutine) when it’s first needed, saves memory and makes startup faster, can slow things down the first time a routine is used
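
A minimal sketch using the POSIX dlopen/dlsym interface (the filename "libm.so.6" is Linux-specific, and older Linux toolchains need -ldl when linking):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* The math library is mapped in only when this runs,
           not when the program is loaded. */
        void *h = dlopen("libm.so.6", RTLD_LAZY);
        if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        double (*cosine)(double) = (double (*)(double))dlsym(h, "cos");
        if (cosine) printf("cos(0) = %f\n", cosine(0.0));

        dlclose(h);   /* unload when no longer needed */
        return 0;
    }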

39
New cards

Swapping

when the OS moves a whole process out of memory (to disk) to free up space for other processes

40
New cards

disk (Stores the swapped-out process's memory), CPU and MMU (coordinate the swap)

hardware involved with swapping

41
New cards
  • memory is full and processes need space,

  • low priority processes are chosen to be swapped out

  • PCB stays in memory so the process can resume

  • transfer time is the biggest cost (swapping large processes takes time)

when would swapping be performed

42
New cards

contiguous allocation

Each process gets one big chunk of memory (a ____ block). Managed using base and limit registers. Simple to implement, but leads to fragmentation problems

43
New cards

first-fit

Allocate the first hole big enough (sketched below)

  • fast

  • can leave many small unusable holes
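
A sketch of the first-fit scan over a free list of holes (the struct and names are illustrative; best-fit would instead walk the whole list and remember the smallest hole that still fits):

    #include <stddef.h>

    struct hole { size_t size; struct hole *next; };

    struct hole *first_fit(struct hole *free_list, size_t request) {
        for (struct hole *h = free_list; h != NULL; h = h->next)
            if (h->size >= request)   /* take the FIRST hole that fits */
                return h;
        return NULL;                  /* no hole big enough */
    }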

44
New cards

best-fit

use the smallest hole that still works

  • reduces wasted space

  • slower search, leftover hole may be too small

45
New cards

worst-fit

use the biggest hole

  • may leave usable leftovers

  • can still waste space badly

46
New cards

next-fit

Allocate the first hole big enough, start the search from where you left off last time

47
New cards

50-percent rule

when using first-fit to allocate memory, for every chunk of memory allocated, about half a chunk's worth is wasted to external fragmentation; with N allocated chunks that is N/2 chunks of holes out of 3N/2 total, so around one-third of all memory can't actually be used

48
New cards

External fragmentation

occurs when free memory is scattered in small chunks between allocated blocks, making it impossible to fit larger processes even though there’s enough total free space; caused by dynamic memory allocation (first-fit, best-fit strategies)

49
New cards

Internal fragmentation

when a process is given slightly more memory than it needs, leaving unused space inside the allocated block that can’t be used by others

50
New cards

logical address

also called a virtual address, address generated by the CPU when a program runs.
It's what the program thinks it's using

51
New cards

physical address

The actual location in main memory (RAM) where data is stored.
It’s controlled by the Operating System and hardware.

52
New cards

address translation

lets a process use its own virtual addresses without knowing where it's actually stored in physical memory, making memory management more flexible, secure, and efficient

53
New cards
  • Uses virtual instead of physical addresses

  • Hides actual memory locations

  • Supports moving processes in memory

  • Improves memory protection

  • Enables paging and segmentation

purpose of address translation

54
New cards

Dynamic memory allocation (like first-fit, best-fit strategies), in systems that require contiguous memory

External fragmentation caused by

55
New cards

Fixed-size partitioning, Paging

internal fragmentation caused by

56
New cards

Single-level paging

a memory management scheme where a logical address is split into a page number and offset, with the page number used to look up a frame number in the page table, and the frame number combined with the offset to find the physical address
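
A sketch of that arithmetic for 4 KiB pages, with an illustrative in-memory page_table array standing in for the hardware table walk (no valid-bit check shown):

    #include <stdint.h>

    #define PAGE_SIZE   4096u   /* 2^12 bytes */
    #define OFFSET_BITS 12

    uint32_t translate(uint32_t logical, const uint32_t *page_table) {
        uint32_t page   = logical >> OFFSET_BITS;      /* page number   */
        uint32_t offset = logical & (PAGE_SIZE - 1);   /* within page   */
        uint32_t frame  = page_table[page];            /* table lookup  */
        return (frame << OFFSET_BITS) | offset;        /* physical addr */
    }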

57
New cards

multi-level paging

breaks a logical address into multiple page numbers and an offset, using the first page number to access a page directory and the next to access a page table, which helps save memory by only creating page tables when needed—especially in large address spaces

58
New cards

valid bit

Indicates whether the page is currently mapped to a physical frame in memory (valid) or not (invalid)

59
New cards

Read-only bit

Specifies if the page is protected against writing, allowing only read access (read-only)

60
New cards

Modified (dirty) bit

Marks if the page has been modified (written to) since it was loaded into memory

61
New cards

Reference (used) bit

Tracks whether the page has been accessed (read or written to) recently, helping the OS manage page replacement

62
New cards

page table creation

  • Step 1: OS allocates a page table when a process starts.

  • Step 2: Sets PTBR to point to the page table.

  • Step 3: Fills entries on demand as memory is accessed.

63
New cards

shared memory

The OS lets multiple processes point to the same memory to save space, often for read-only data like code

64
New cards

(page table built for) Copy-on-Write (COW)

After a fork, the parent and child share memory until one writes, triggering the OS to copy the page and give them separate versions.
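
The copying is invisible to the program; user code only observes that the two versions diverge after a write. A minimal sketch of that observable behavior:

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int x = 1;                 /* lives on a page shared after fork */
        pid_t pid = fork();
        if (pid == 0) {
            x = 42;                /* write -> kernel copies the page   */
            printf("child sees x = %d\n", x);   /* 42 */
            _exit(0);
        }
        wait(NULL);
        printf("parent sees x = %d\n", x);      /* still 1 */
        return 0;
    }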

65
New cards

Growing logical address space

The OS allows parts like the stack or heap to grow by adding pages only when they’re actually needed

66
New cards

TLB (Translation Lookaside Buffer)

a small, fast cache that stores recent page table entries to speed up address translation

67
New cards

What TLB does

reduces the time needed for each memory access by avoiding a full page table lookup when the translation is already cached

68
New cards

Deriving EMAT (effective memory access time)

If memory access time is m and the TLB hit ratio is α, then EMAT = αm + (1 - α)(2m) for single-level paging, since a TLB hit takes one access and a miss takes two; for two-level paging, a TLB miss takes three accesses, so EMAT = αm + (1 - α)(3m)
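
As a worked check with illustrative numbers, take m = 100 ns and α = 0.9: single-level EMAT = 0.9(100) + 0.1(200) = 110 ns, and two-level EMAT = 0.9(100) + 0.1(300) = 120 ns; a high hit ratio keeps the average close to a single memory access.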

69
New cards

consumes excessive memory (especially for sparse address spaces, as it needs an entry for every page in the virtual address space)

problem with very large single-level page table

70
New cards

page-table length register (how it Alleviates Page Table Size issue)

stores the size of the page table, allowing the operating system to know where the valid entries end. This prevents the OS from having to allocate memory for the maximum possible number of pages, reducing unnecessary memory usage for small processes

71
New cards

multi-level paging (how it Alleviates Page Table Size issue)

breaks the page table into smaller parts (like a tree structure), so only the needed parts of the page table are kept in memory. This reduces memory usage by avoiding allocation for unused virtual address space.

72
New cards

Hashed Page Tables (how it Alleviates Page Table Size issue)

use a hash function to index into the page table, allowing the table to be sparse and much smaller. This is especially effective for large address spaces with few used pages, reducing the size and lookup time.

73
New cards

Inverted Page Tables (how it Alleviates Page Table Size issue)

Instead of keeping an entry for every possible virtual page, it keeps one entry for each physical memory frame. This means the table’s size depends on how much real memory you have, not how big the virtual memory is. So, it uses much less memory for the page table, even when the virtual address space is huge.

74
New cards

Demand Paging

a memory management scheme where pages of a program are loaded into memory only when they are required (i.e., on-demand). This reduces memory usage because only the pages actively needed by a program are loaded into physical memory, rather than the entire program

75
New cards

what demand paging does

When a program accesses a page that is not in memory (a page fault occurs), the operating system brings the page into memory from secondary storage (usually a disk).

76
New cards
  1. Page Fault Occurs (The process tries to access a page that is not in memory)

  2. Trap to the OS (The hardware triggers an interrupt (trap), and the OS takes control)

  3. Check for Valid Access (The OS checks if the access is valid)

    • if invalid → the process is terminated

  4. Find a Free Frame (The OS checks for free frames in memory. If none are available, it must decide which page to evict)

  5. Swap Page from Disk (The OS loads the page from the backing store (disk) into memory)

  6. Update Page Table (The OS updates the page table to reflect that the page is now in memory and marks it as valid)

  7. Resume Process

Steps that Occur on Page Fault

77
New cards

Pure Demand Paging

a variant of demand paging where no pages are loaded at program startup. Pages are only loaded when needed, reducing startup time and allowing more processes to run with limited memory.

78
New cards

FIFO (first in, first out)

Replaces the oldest loaded page first, easy to implement but can evict useful pages, causing more page faults

79
New cards

Optimal page replacement

Replaces the page not needed for the longest future time, offering the lowest possible page fault rate but needs impossible future knowledge.

80
New cards

LRU (Least Recently Used)

Replaces the least recently used page, performing close to optimal and easy to approximate but requiring costly tracking of access history.

81
New cards

Second Chance

Enhances FIFO (first in first out) by giving recently accessed pages a second chance, smarter than FIFO and effective in practice, less accurate than LRU
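
A sketch of the usual clock-hand form (frame count and arrays are illustrative): the hand sweeps the frames, a set reference bit buys the page one more pass, and a clear bit marks the victim.

    #define NFRAMES 8

    int page_in[NFRAMES];   /* which page occupies each frame */
    int ref_bit[NFRAMES];   /* set by hardware on each access */
    int hand = 0;           /* clock hand position            */

    /* Return the frame to evict from, giving recently referenced
       pages a second chance by clearing their bit and moving on. */
    int pick_victim(void) {
        for (;;) {
            if (ref_bit[hand] == 0) {            /* not used recently */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            ref_bit[hand] = 0;                   /* spend second chance */
            hand = (hand + 1) % NFRAMES;
        }
    }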

82
New cards

Belady's anomaly

Sometimes, adding more memory frames can surprisingly cause more page faults with certain algorithms like FIFO.
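
A runnable sketch that reproduces the anomaly with FIFO (function and array names are illustrative; the reference string is the classic one): 3 frames give 9 faults, 4 frames give 10.

    #include <stdbool.h>
    #include <stdio.h>

    int fifo_faults(const int *refs, int n, int nframes) {
        int frames[16];               /* assumes nframes <= 16 */
        int used = 0, oldest = 0, faults = 0;
        for (int i = 0; i < n; i++) {
            bool hit = false;
            for (int j = 0; j < used; j++)
                if (frames[j] == refs[i]) { hit = true; break; }
            if (hit) continue;
            faults++;
            if (used < nframes)
                frames[used++] = refs[i];        /* fill a free frame */
            else {
                frames[oldest] = refs[i];        /* evict oldest page */
                oldest = (oldest + 1) % nframes;
            }
        }
        return faults;
    }

    int main(void) {
        int refs[] = {1,2,3,4,1,2,5,1,2,3,4,5};
        int n = sizeof refs / sizeof refs[0];
        printf("3 frames: %d faults\n", fifo_faults(refs, n, 3)); /* 9  */
        printf("4 frames: %d faults\n", fifo_faults(refs, n, 4)); /* 10 */
        return 0;
    }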

83
New cards

stack algorithm

An algorithm where adding more frames never causes more page faults, because the pages held with f frames are always a subset of the pages held with f+1 frames.

84
New cards

Stack Algorithms Avoid Belady’s Anomaly

____ avoids _____ by always keeping more pages when given more memory, without messing up the page replacement order

85
New cards

Local Page Replacement

Each process has its own set of frames, and pages are replaced only within that process's allocated memory.

86
New cards

Global Page Replacement

A single pool of frames is shared by all processes, and pages can be replaced across processes

87
New cards

Local Page Replacement (advantages/disadvantages)

Prevents processes from affecting each other’s memory, can waste memory if a process has more frames than needed

88
New cards

Global Page Replacement (advantages/disadvantages)

Uses memory more efficiently by sharing frames, one process can cause page faults in others

89
New cards

thrashing

happens when the system spends more time swapping pages than running programs, causing it to slow down badly

90
New cards

what causes thrashing

too many processes competing for too little memory, or giving processes too few frames

91
New cards

what remedies thrashing

giving processes more frames, using better replacement algorithms, reducing the number of processes, or managing working sets more carefully

92
New cards

working set model

A process’s working set is the set of pages it is actively using, and keeping the full set in memory helps it run efficiently as the set changes over time.

93
New cards

Localities

Programs show temporal locality (recent data used again soon) and spatial locality (nearby data accessed together).

94
New cards

Arrays

have good spatial locality because elements are stored next to each other in memory

95
New cards

Linked lists

generally have poor spatial locality because their nodes can be scattered across memory

96
New cards

hash tables

can have mixed locality, depending on the implementation — accessing random buckets can hurt spatial locality

97
New cards

sorting algorithms

usually have good locality because they tend to work on nearby data

98
New cards

Matrix operation, copying row by row

good locality (because memory is laid out in rows)

99
New cards

Matrix operation, copying column by column

bad for locality (jumps across non-adjacent memory)
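
A sketch of both traversal orders in C, where 2-D arrays are laid out row-major (the size N is illustrative):

    #define N 1024

    void copy_rows(double dst[N][N], double src[N][N]) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                dst[i][j] = src[i][j];   /* adjacent addresses each step */
    }

    void copy_cols(double dst[N][N], double src[N][N]) {
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)
                dst[i][j] = src[i][j];   /* jumps N*8 bytes each step */
    }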

100
New cards

matrix multiplication

can be optimized for locality by reordering loops or using block-based multiplication
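
As one example of loop reordering, the i-k-j order makes the innermost accesses to B and C unit-stride; a sketch with an illustrative size N (assumes C starts zeroed):

    #define N 512

    void matmul_ikj(double A[N][N], double B[N][N], double C[N][N]) {
        for (int i = 0; i < N; i++)
            for (int k = 0; k < N; k++) {
                double a = A[i][k];            /* reused across the j loop */
                for (int j = 0; j < N; j++)
                    C[i][j] += a * B[k][j];    /* unit-stride B and C */
            }
    }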