OS Deadlocks and Memory Management


1
New cards

Deadlock

A set of processes is deadlocked when each one is waiting for a resource held by another, so none of them can proceed

2
New cards

4 Conditions of Deadlock

  1. Mutual exclusion
  2. Hold and wait
  3. No preemption
  4. Circular wait
3
New cards

Mutual Exclusion

  • At least one non-shareable resource
  • Additional requests for the resource must wait until it is released
  • Processes repeatedly contend over which one gets exclusive access
4
New cards

Hold and Wait

  • Process holds at least one resource

  • Waiting to get resources held by other processes

EX. P1 holds R1 and wants R2, but R2 is held by P2. P1 cannot access R2 until P2 releases it; if every process ends up waiting like this, a deadlock occurs.

5
New cards

No Preemption

  • Resources only released by process after completing its task

EX. P1 wants R2, but if OS is non-preemptive, it has to wait for P2 to release R2

6
New cards

Circular Wait

  • Set of processes { P0, P1, …, Pn }
  • P0 waiting for P1, P1 waiting for P2, … Pn-1 waiting for Pn
  • Pn waiting for P0
7
New cards

System Resource Allocation Graphs

Graph representing processes and resources with directed edges.

8
New cards

System Resource Allocation Graphs Symbols

  1. Processes - circle

  2. Resources - square with dots

  3. Request edge: Pi 🡪 Rj

  4. Assignment edge: Rj 🡪 Pi

9
New cards

Methods for Handling Deadlocks

  1. Prevention / Avoidance - Ensure system cannot have deadlocks
  2. Allow deadlocks then recover
  3. Ostrich Algorithm - Ignore deadlocks
10
New cards

Deadlock Prevention

  • Eliminate at least one condition

Disadvantages:

  • Low resource utilization

  • Less throughput (keep getting interrupted)

11
New cards

Prevention for Mutual exclusion

  • Needed for non-shareable resources but not for shareable ones

  • Mutual exclusion can't be eliminated for all resources: some are intrinsically non-shareable

12
New cards

Prevention for Hold and wait

  • Guarantee that process does not hold a resource when requesting another

EX. Allocate all resources before execution begins

13
New cards

Prevention for No preemption

  • When request not granted, process must release all resources it's holding

  • Restarted only if it regains old and new resources

14
New cards

Prevention for Circular wait

  • Impose a total ordering on resource types

  • Require each process to request resources in increasing order

  • If a process holds Ri, it may request Rj only if Rj > Ri

  • If a process holding Rj needs Ri (Ri < Rj), it must first release Rj (see the lock-ordering sketch below)
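A minimal sketch of this ordering rule, assuming two hypothetical resources protected by POSIX mutexes. Because every thread acquires the lower-numbered lock first, no circular wait can form.

```c
/* Sketch: preventing circular wait with a fixed lock order.
 * Two hypothetical resources guarded by pthread mutexes; every
 * thread must acquire the lower-numbered lock first.
 * Build with: cc lockorder.c -pthread */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t r1 = PTHREAD_MUTEX_INITIALIZER;  /* resource R1 (lower order)  */
static pthread_mutex_t r2 = PTHREAD_MUTEX_INITIALIZER;  /* resource R2 (higher order) */

static void *worker(void *arg)
{
    /* Both threads lock R1 before R2, so no cycle can form. */
    pthread_mutex_lock(&r1);
    pthread_mutex_lock(&r2);
    printf("thread %ld holds R1 and R2\n", (long)arg);
    pthread_mutex_unlock(&r2);
    pthread_mutex_unlock(&r1);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, (void *)1L);
    pthread_create(&b, NULL, worker, (void *)2L);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}
```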

15
New cards

Safe state

  • The system can allocate resources to each process (up to its maximum) in some order without deadlock
  • Over-allocating resources puts the system into an unsafe state
16
New cards

Deadlock Avoidance

  • Needs advance information on how processes will request resources
  • Uses claim edges (possible future requests), drawn as dashed edges P ---> R
  • A request is granted only if converting it to an assignment edge forms no cycle
  • Graph-based avoidance works for single-instance resources only
17
New cards

Banker's Algorithm

  • Ensure bank doesn't over-allocate cash
  • Check if still in safe state to allocate resources
  • If not safe, won't allocate since it might enter an unsafe state
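A sketch of the safety check at the heart of the Banker's Algorithm. The Available, Max, and Allocation matrices below are made-up illustrative values, not data from these notes.

```c
/* Sketch of the Banker's safety check over a small example state. */
#include <stdbool.h>
#include <stdio.h>

#define P 3  /* processes      */
#define R 2  /* resource types */

bool is_safe(int avail[R], int max[P][R], int alloc[P][R])
{
    int work[R];
    bool finish[P] = { false };

    for (int j = 0; j < R; j++)
        work[j] = avail[j];

    /* Repeatedly look for a process whose remaining need fits in work. */
    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int i = 0; i < P; i++) {
            if (finish[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (max[i][j] - alloc[i][j] > work[j]) { fits = false; break; }
            if (fits) {                       /* pretend P_i runs to completion */
                for (int j = 0; j < R; j++)
                    work[j] += alloc[i][j];   /* it returns what it holds        */
                finish[i] = true;
                done++;
                progress = true;
            }
        }
        if (!progress) return false;          /* no process can finish: unsafe   */
    }
    return true;                              /* a safe sequence exists          */
}

int main(void)
{
    int avail[R]    = { 2, 1 };
    int max[P][R]   = { {3, 2}, {2, 2}, {4, 1} };
    int alloc[P][R] = { {1, 0}, {1, 1}, {2, 0} };
    printf("safe: %s\n", is_safe(avail, max, alloc) ? "yes" : "no");
    return 0;
}
```

A full implementation would rerun this check before granting each request and refuse the request if the resulting state were unsafe.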
18
New cards

Deadlock Detection

  • Algorithm to check for deadlocks and recover

Single-instance resources: use a Wait-For graph
Multiple instances: use time-varying data structures similar to those in the Banker's Algorithm

19
New cards

Detection algorithm depends on what 2 factors?

  • How often is a deadlock likely to occur?
  • How many processes are affected by the deadlock?
20
New cards

What deadlock detection do we use for single instance resource?

Wait-for graph

21
New cards

What deadlock detection do we use for several instances?

Time-varying data structures found in Banker's Algorithm

22
New cards

Wait-For Graph

Graph with an edge Pi 🡪 Pj when process Pi is waiting for a resource held by Pj; a cycle in the graph indicates a deadlock.
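A sketch of how detection over a wait-for graph might look: a depth-first search that reports any cycle. The 3-process adjacency matrix is a hypothetical example.

```c
/* Sketch: cycle detection in a wait-for graph (single-instance resources).
 * adj[i][j] == 1 means process i is waiting for a resource held by j. */
#include <stdbool.h>
#include <stdio.h>

#define N 3

static int adj[N][N] = {
    { 0, 1, 0 },   /* P0 waits for P1                    */
    { 0, 0, 1 },   /* P1 waits for P2                    */
    { 1, 0, 0 },   /* P2 waits for P0: cycle -> deadlock */
};

static bool dfs(int u, int color[])   /* 0 = unvisited, 1 = on stack, 2 = done */
{
    color[u] = 1;
    for (int v = 0; v < N; v++) {
        if (!adj[u][v]) continue;
        if (color[v] == 1) return true;              /* back edge: cycle found */
        if (color[v] == 0 && dfs(v, color)) return true;
    }
    color[u] = 2;
    return false;
}

int main(void)
{
    int color[N] = { 0 };
    bool deadlock = false;
    for (int i = 0; i < N && !deadlock; i++)
        if (color[i] == 0)
            deadlock = dfs(i, color);
    printf("deadlock detected: %s\n", deadlock ? "yes" : "no");
    return 0;
}
```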

23
New cards

What happens when you terminate processes?

  • Abort all deadlocked processes
  • Abort one process at a time (partial termination), but choosing which one is the hard part
24
New cards

What happens when you preempt resources?

Must select a victim to preempt and make sure the same process is not starved by being chosen repeatedly

25
New cards

Hybrid Approach

  • Combined use of basic methods
  • Assumes resources are grouped
  • Diff. method for each group
26
New cards

Memory Management

  • Main memory is a large array of words, each with a unique address
  • Memory sharing
  • Memory management algorithms vary
27
New cards

Memory sharing

Several processes in main memory to improve utilization and response

28
New cards

Types of Memory Address

  1. Symbolic
  2. Relocatable
  3. Absolute
29
New cards

Symbolic Memory Address

Variables

30
New cards

Relocatable Memory Address

Offset from the start of the module, e.g. start + 100

31
New cards

Absolute Memory Address

Physical addresses
Ex: 255, 254

32
New cards

3 Forms of Efficient Memory Use

  1. Dynamic loading
  2. Dynamic linking
  3. Overlays
33
New cards

Dynamic loading

  • Subroutines are not loaded until called (routines that are never used are never loaded)
  • Saves space

EX. A program includes the math library (#include <math.h>). When the program is loaded, the whole library is not brought into memory; a routine is loaded only when the code actually calls it.

34
New cards

Dynamic linking

  • Uses a stub as a pointer to the routine in the shared library (DLL)
  • Saves disk space and memory, since library code is not copied into every executable
  • The actual module is not linked into the program; the compiler/linker only records a dynamic reference to it

EX. Windows has dynamic link library
EX. UNIX has shared objects (dynamic libraries)
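For the UNIX case, the standard dlopen/dlsym interface shows a shared object being brought in at run time and a routine resolved only when requested. The library name libm.so.6 is an assumption that holds on most Linux systems.

```c
/* Sketch: loading a shared object at run time with dlopen/dlsym.
 * Build with: cc dl.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    void *handle = dlopen("libm.so.6", RTLD_LAZY);   /* assumed library name */
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the routine only when it is actually needed. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0.0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```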

35
New cards

Overlays

  • Keeps only needed instructions in memory

EX. Code w/ 1000+ lines, don't want to put all that into main memory. So get only needed instructions and keep those in memory

36
New cards

(Overlays) What do you do if no more space?

Overwrite previous instructions that are not needed anymore

37
New cards

Logical addresses are generated by ___ while physical addresses are generated by ___

Logical - CPU

Physical - Memory Management Unit (MMU)

38
New cards

Logical address

  • Does not refer directly to physical memory

  • Abstracts the physical address space so that a process sees addresses in its own (virtual) view

39
New cards

Another name for logical address

Virtual address

40
New cards

Physical address

Physical location in memory hardware where data is stored

41
New cards

What determines how logical and physical addresses are connected? And what is needed for this?

OS

Mapping scheme requires hardware support (MMU) to convert to physical

42
New cards

Swapping

  • Used with insufficient memory
  • Needs a fast secondary memory for efficiency
43
New cards

Why do roll out (swap out) and roll in (swap back into main memory) increase context-switch time?

Because a context switch assumes the next process is already in main memory; if that process has been swapped out, the CPU must wait for it to be rolled back in from secondary memory, so the effective switch time increases

44
New cards

What happens when a process is swapped out?

Process is removed from main memory and placed on secondary memory by the scheduler

45
New cards

Context Switch vs Swapping

Context Switch - CPU and main memory

Swapping - main memory & secondary memory

46
New cards

What are good candidates for swapping out?

Waiting processes

  • Get from Job queue
  • Check which processes have been waiting the longest, swap them out, then after some time swap them back in
  • Doing this reduces their wait time
  • Swap is not permanent
47
New cards

What is the modified version of swapping used in Unix?

  • Swapping normally disabled
  • Enabled when memory runs out
48
New cards

What is the swapping in Microsoft Windows?

Partial swapping

  • Current program swapped out
  • User determines swap rather than scheduler
49
New cards

2 Types of Memory Allocation Methods

  1. Contiguous
  2. Non-contiguous
50
New cards

Contiguous Memory Allocation

Process occupies a single continuous block

51
New cards

Contiguous Memory Allocation Algorithms

  1. Single-partition
  2. Multiple-partition
52
New cards

Single-partition

  • OS code either in high or low memory

  • Must be protected from user code (relocatable addressing/offsets keep user processes from overlapping the reserved OS memory region)

53
New cards

Types of Multiple-partition Algorithms

  1. Fixed
  2. Dynamic
54
New cards

Fixed (Multiple-partition Algorithms)

  • 1 partition = 1 process
  • Multiprogramming limited to number of partitions
55
New cards

Dynamic (Multiple-partition Algorithms)

  • Memory starts as one large hole (one big free block)
  • Processes arrive and are allocated a block
  • Holes become available as processes terminate
56
New cards

What are the algorithms for dynamic storage allocation problem?

  1. First-fit - get first hole big enough for process
  2. Best-fit - get smallest hole that is big enough
  3. Worst-fit - get largest hole
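A sketch comparing the three placement policies over one hypothetical free list; choose_hole and the hole sizes are made up for illustration.

```c
/* Sketch: choosing a hole for a request under first-, best-, and worst-fit.
 * Returns the index of the chosen hole, or -1 if nothing fits. */
#include <stdio.h>

enum policy { FIRST_FIT, BEST_FIT, WORST_FIT };

static int choose_hole(const int hole[], int n, int request, enum policy p)
{
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (hole[i] < request)
            continue;                          /* hole too small          */
        if (p == FIRST_FIT)
            return i;                          /* first hole that fits    */
        if (chosen < 0 ||
            (p == BEST_FIT  && hole[i] < hole[chosen]) ||
            (p == WORST_FIT && hole[i] > hole[chosen]))
            chosen = i;                        /* smallest or largest fit */
    }
    return chosen;
}

int main(void)
{
    int holes[] = { 100, 500, 200, 300, 600 };
    int n = 5, request = 212;
    printf("first-fit -> hole %d\n", choose_hole(holes, n, request, FIRST_FIT));
    printf("best-fit  -> hole %d\n", choose_hole(holes, n, request, BEST_FIT));
    printf("worst-fit -> hole %d\n", choose_hole(holes, n, request, WORST_FIT));
    return 0;
}
```

For the 212-unit request, first-fit picks the 500-unit hole (the first that fits), best-fit the 300-unit hole, and worst-fit the 600-unit hole.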
57
New cards

Which algorithm is the fastest?

First-fit

58
New cards

Which algorithm is best for storage utilization?

Best-fit

59
New cards

2 Types of Fragmentation

  1. Internal
  2. External
60
New cards

External Fragmentation

  • Free memory is scattered in small blocks
  • Enough memory exists but not contiguous
  • Allocation algorithm affects fragmentation
61
New cards

50% Rule in External Fragmentation

Given N allocated blocks, another 0.5N blocks will be lost to fragmentation, i.e. about one-third of memory becomes unusable

62
New cards

Internal Fragmentation

Allocated memory has unused space within a block due to fixed-size allocation

63
New cards

How to solve external fragmentation?

Compaction (not always possible)

May be combined with swapping

64
New cards

Compaction Algorithms

  1. Move all processes toward one end, leaving one large hole at the other end
  2. Move processes so that the free memory forms one large hole in the middle
65
New cards

Types of Non-contiguous Allocation

  1. Paging
  2. Segmentation
  3. Segmentation with paging
66
New cards

Paging

  • Memory is divided into fixed-size blocks
  • A process's address space is broken down into pages
  • The page table converts pages to frames (see the translation sketch below)
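A sketch of the page-to-frame translation, assuming a hypothetical 4 KiB page size and a tiny hard-coded page table.

```c
/* Sketch: translating a logical address to a physical one with a page table. */
#include <stdio.h>

#define PAGE_SIZE 4096u          /* 4 KiB pages -> 12-bit offset */

/* page_table[page] = frame number; a tiny hypothetical table */
static unsigned page_table[] = { 5, 2, 7, 1 };

static unsigned translate(unsigned logical)
{
    unsigned page   = logical / PAGE_SIZE;   /* high bits: page number        */
    unsigned offset = logical % PAGE_SIZE;   /* low bits: offset within page  */
    unsigned frame  = page_table[page];      /* MMU lookup                    */
    return frame * PAGE_SIZE + offset;       /* physical address              */
}

int main(void)
{
    unsigned logical = 2 * PAGE_SIZE + 100;  /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}
```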
67
New cards

What is the size of the process based on? (paging)

Size of process is based on how many pages

68
New cards

What is the size of a frame?

Frame is same size as page to avoid external fragmentation

69
New cards

Pages translate into _?

Pages (logical) translate into frames (physical)

70
New cards

Physical Address

Actual location in memory hardware.

71
New cards

Fragmentation in Paging

  • Solves external fragmentation
  • Internal fragmentation is present 🡪 page may not be fully occupied
72
New cards

What is the worst case for fragmentation in paging?

A process of size n pages + 1 byte is allocated n + 1 frames

Wasted: page size - 1 byte (almost an entire frame), as computed in the sketch below
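A quick check of that worst case, assuming a hypothetical 4 KiB page size:

```c
/* Sketch: worst-case internal fragmentation for a 4 KiB page size. */
#include <stdio.h>

int main(void)
{
    unsigned page_size = 4096;
    unsigned proc_size = 3 * page_size + 1;                        /* n pages + 1 byte   */
    unsigned frames    = (proc_size + page_size - 1) / page_size;  /* round up to n + 1  */
    unsigned wasted    = frames * page_size - proc_size;           /* page_size - 1      */
    printf("frames = %u, wasted = %u bytes\n", frames, wasted);    /* 4 frames, 4095 B   */
    return 0;
}
```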

73
New cards

Page Table Structure

  • How the page table is stored is system-specific
  • Most systems keep one page table per process
  • Context switches take longer because page-table information must be saved with the PCB
74
New cards

Shared Pages

  • Non-self-modifying code (read-only)
  • Only one copy is stored in memory but accessed by many
75
New cards

What happens to the access when you share a page?

  • Every sharing process can read the page, but none can modify it
  • Similar to threads sharing common data within a process
76
New cards

What are examples of shared pages? Where are they used?

compilers, editors

Used in time-sharing environment

77
New cards

Segmentation

  • Divides logical memory into variable-length segments
  • Gives a more intuitive, logical view of memory (e.g. a program divided into segments of different lengths)
78
New cards

Segmentation Hardware

  • Maps two-dimensional (segment, offset) addresses onto one-dimensional physical addresses
  • Uses a segment table
  • Similar to paging, except segments are variable-sized
  • Intel x86 is based on segmentation
79
New cards

What does an entry in the segment table consist of?

segment base (starting address) and limit (segment length)
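A sketch of how the base and limit are used during translation; the segment table values are illustrative assumptions.

```c
/* Sketch: segment-table translation with a base/limit check. */
#include <stdio.h>
#include <stdlib.h>

struct segment { unsigned base; unsigned limit; };

/* hypothetical segment table */
static struct segment seg_table[] = {
    { 1000, 400 },   /* segment 0 */
    { 6300, 700 },   /* segment 1 */
    { 4300, 500 },   /* segment 2 */
};

static unsigned translate(unsigned segment, unsigned offset)
{
    if (offset >= seg_table[segment].limit) {   /* outside the segment */
        fprintf(stderr, "addressing error: trap to OS\n");
        exit(1);
    }
    return seg_table[segment].base + offset;
}

int main(void)
{
    printf("(2, 53) -> %u\n", translate(2, 53));  /* 4300 + 53 = 4353     */
    translate(1, 800);                            /* 800 >= limit 700: trap */
    return 0;
}
```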

80
New cards

Segmentation with paging

  • Solves external fragmentation problem of pure segmentation

  • Each segment composed of several equal-sized pages

81
New cards

File System

  • Most visible aspect of OS
  • Provides mechanism for online storage and access to data and programs
  • Mapped on physical devices (usually nonvolatile)
82
New cards

File Basics: Attribute

Name
Type
Location
Size
Security
Date/Time
User ID

83
New cards

File Basics: Operations

  • Create

  • Write

  • Read

  • Reposition within file

  • Delete

  • Truncate (erase the file's contents but keep its attributes, without deleting the file or changing its location)

84
New cards

File Basics: Types of Access Methods

  1. Direct
  2. Sequential
  3. Indexed
85
New cards

Access Methods: Direct

Direct access to a file block

86
New cards

Access Methods: Sequential

  • Contents are accessed in order, from beginning to end
  • The most basic way of accessing a file
  • Used by compilers and editors
  • A file-position pointer starts at the file's base address and advances as data is read (see the read-loop sketch below)
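A sketch of sequential access with the C standard I/O library: each fgetc call returns the next byte and implicitly advances the file-position pointer. The file name "notes.txt" is hypothetical.

```c
/* Sketch: sequential file access; the stream's position pointer advances
 * automatically with every read. */
#include <stdio.h>

int main(void)
{
    FILE *fp = fopen("notes.txt", "r");   /* hypothetical file */
    if (!fp)
        return 1;

    int c;
    while ((c = fgetc(fp)) != EOF)        /* read in the order data appears */
        putchar(c);                       /* position pointer moves to the next byte */

    fclose(fp);
    return 0;
}
```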
87
New cards

In sequential access, what happens if user wishes to read the 1st word of file?

The pointer returns that word to the user and then advances to the next word

88
New cards

In sequential access, how is the data in the file evaluated?

In the order it appears in the file, which makes the data easy to access

89
New cards

Access Methods: Indexed

  • Variant of direct access

  • Maintains an index w/ address

  • OS searches for address in index which points to the block

90
New cards

Directory Structure: How are file systems structured?

  • File system divided into partitions or volumes (PC and Mac)

  • Each partition contains info about files (directory, FAT, etc.)

91
New cards

Directory Structure: Types of Logical Schemes

  1. Single-level
  2. Two-level
  3. Tree-structured
  4. Acyclic graph
  5. General graph
92
New cards

Single-level Logical Scheme

  • Everything is in one directory; simple to implement
  • Since all files share one directory, no two files may have the same name
93
New cards

In a Single-level Logical Scheme, if multiple users have access, how will you divide the files?

Prefixing file names with usernames to avoid name conflicts

  • many ways to do this, this is just 1 solution
94
New cards

Two-level Logical Scheme

  • A master file directory lists the different users, and each user has their own user file directory
  • Every user directory hangs off the master file directory
95
New cards

True or False: In Two-level Logical Scheme, you can access files of other users

FALSE

  • Isolated file directories per user
  • Can't access files of other users
96
New cards

Tree-structured Logical Scheme

  • Different directories have sub directories

  • 1 parent per child

  • Root directory on top that spans to other directories

97
New cards

True or False: In a Tree-structured Logical Scheme, you can have subdirectories

TRUE

Can have files or subdirectories under

98
New cards

Acyclic graph Logical Scheme

  • Extension of tree structure but not as strict as tree
  • No cycles
  • Enables logical grouping and use of files across multiple locations
99
New cards

Which logical scheme permits shared subdirectories? And how does it do it?

Acyclic graph Logical Scheme

via Links

100
New cards

How do Acyclic graphs save space?

Users can share files without duplication