CSCI51.01 LT2: Deadlocks, Memory Management, File Management, and Mass Storage Structures


143 Terms

1
New cards
Deadlock
Indefinite blocking of processes waiting for resources.
2
New cards
Resource Contention
Competition among processes for limited resources.
3
New cards
Coffman Conditions
Four conditions necessary for deadlock occurrence.
4
New cards
Mutual Exclusion
Resource held in non-shareable mode by one process.
5
New cards
Hold and Wait
Process holds resources while waiting for more.
6
New cards
No Preemption
Resources can only be released voluntarily by processes.
7
New cards
Circular Wait
Processes form a cycle waiting for each other's resources.
8
New cards
Deadlock Prevention
Avoid deadlock by preventing necessary conditions from holding.
9
New cards

Deadlock Avoidance

Requires the system to have a priori information about the future resource needs of processes to make allocation decisions that avoid deadlock.

10
New cards

Deadlock Detection and Recovery

Allow deadlocks to occur, then detect them and take action to recover.

11
New cards
Ostrich Algorithm
Ignore deadlocks when they are rare and handling them would cost more than the harm they cause.
12
New cards
System Resource Allocation Graphs
Visual representation of resource allocation in a system.
13
New cards
Request Edge
Indicates a process requesting a resource in graphs.
14
New cards
Assignment Edge
Shows resource assigned to a process in graphs.
15
New cards
Wait-For Graph
Graph indicating processes waiting for resources held by others.
16
New cards
Cycle in Wait-For Graph
Indicates the presence of a deadlock.
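The cycle check above can be sketched as a depth-first search over the wait-for graph. This is an illustrative helper, not from the cards; the graph is assumed to be a dict mapping each process to the processes it waits on.

```python
# Deadlock detection sketch: find a cycle in a wait-for graph.
# Nodes are processes; an edge P -> Q means P waits for a resource held by Q.

def has_cycle(wait_for):
    """Return True if the wait-for graph contains a cycle (a deadlock)."""
    WHITE, GRAY, BLACK = 0, 1, 2              # unvisited / on current path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            if color.get(q, WHITE) == GRAY:   # back edge -> cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)
```

For example, `has_cycle({"P1": ["P2"], "P2": ["P1"]})` reports a deadlock, while a chain with no back edge does not.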
17
New cards
Resource Allocation
Distribution of resources among processes in a system.
18
New cards
Starvation
Process indefinitely waiting for resources due to scheduling.
19
New cards
Total Ordering
Sequentially ordering resource types to prevent circular wait.
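Total ordering can be sketched with locks: if every process acquires resources in ascending order of their assigned numbers, no circular wait can form. The resource names and numbers below are illustrative, not from the cards.

```python
# Circular-wait prevention sketch: acquire locks in a fixed global order.
import threading

ORDER = {"printer": 1, "disk": 2, "tape": 3}    # resource enumeration
locks = {name: threading.Lock() for name in ORDER}

def acquire_in_order(*names):
    """Acquire the named locks in ascending resource-number order."""
    ordered = sorted(names, key=ORDER.get)
    for name in ordered:
        locks[name].acquire()
    return ordered

def release_all(names):
    """Release held locks in reverse acquisition order."""
    for name in reversed(names):
        locks[name].release()
```

Even if one process asks for `("disk", "printer")` and another for `("printer", "disk")`, both end up locking `printer` before `disk`, so neither can hold the second resource while waiting for the first.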
20
New cards
Resource Enumeration
Assigning a unique order to resource types.
21
New cards
Resource Utilization
Efficiency of resource usage in a system.
22
New cards
System Throughput
Rate of process completion in the system.
23
New cards
Hybrid Approaches
Combines methods for effective deadlock management.
24
New cards

Deadlock Avoidance

Requires the system to have a priori information about the future resource needs of processes to make allocation decisions that avoid deadlock.

25
New cards
Safe State
State where resources can be allocated without deadlock.
26
New cards
Unsafe State
State where deadlock may occur if resources allocated.
27
New cards
Resource Allocation Graph
Graph showing resource requests and allocations.
28
New cards

Request Edge

Represents future resource requests in allocation graph.

29
New cards
Assignment Edge
Indicates allocated resources in allocation graph.
30
New cards
Banker's Algorithm
Avoids deadlock by granting a request only if the resulting state is safe.
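The safety check at the core of the Banker's Algorithm can be sketched as follows; this is a single-function illustration, with matrix names (`available`, `maximum`, `allocation`) chosen here rather than taken from the cards.

```python
# Banker's Algorithm safety check sketch.
# allocation[i][j]: instances of resource j held by process i
# maximum[i][j]:    maximum demand of process i for resource j
# available[j]:     currently free instances of resource j

def is_safe(available, maximum, allocation):
    """Return True if some ordering lets every process finish."""
    n, m = len(allocation), len(available)
    work = list(available)
    need = [[maximum[i][j] - allocation[i][j] for j in range(m)] for i in range(n)]
    finished = [False] * n
    while True:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):           # process i can finish; reclaim resources
                    work[j] += allocation[i][j]
                finished[i] = True
                progressed = True
        if not progressed:
            return all(finished)             # safe iff every process could finish
```

Run on a common textbook configuration (five processes, three resource types), the check finds a safe sequence and returns `True`.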
31
New cards
Detection Algorithm
Identifies deadlocks through periodic system checks.
32
New cards
Wait-For Graph
Graph indicating processes waiting for resources.
33
New cards
Cycle in Graph
Indicates a possible deadlock; with single-instance resources, a cycle means deadlock is certain.
34
New cards
Process Termination
Aborting processes to resolve deadlocks.
35
New cards
Resource Preemption
Taking resources from processes to resolve deadlocks.
36
New cards
Hybrid Approach
Combines multiple methods for deadlock management.
37
New cards
Critical Resources
Resources likely involved in deadlocks.
38
New cards
Dynamic Loading
Loads subroutines into memory only when called.
39
New cards
Dynamic Linking
Uses pointers to routines in Dynamic Link Libraries.
40
New cards
Overlays
Keeps necessary instructions in memory, overwriting others.
41
New cards
Logical Address
Address generated by CPU for program execution.
42
New cards
Physical Address
Actual memory address used by hardware.
43
New cards
Memory Management Unit (MMU)
Maps logical addresses to physical addresses.
44
New cards
Swapping
Moves processes between main memory and storage.
45
New cards
Roll Out
Swapping a process from memory to disk.
46
New cards
Roll Out
Moving a process to secondary storage.
47
New cards
Roll In
Moving a process back to main memory.
48
New cards
Context Switch Time
Time taken to switch between processes.
49
New cards
Fast Secondary Storage
Essential for efficient process swapping.
50
New cards
Candidates for Swapping
Waiting processes suitable for memory swapping.
51
New cards
Unix Swapping
Disabled normally, enabled when memory is low.
52
New cards
Windows Swapping
Partial swapping controlled by the user.
53
New cards
Contiguous Allocation
Single continuous memory block for each process.
54
New cards
Non-Contiguous Allocation
Memory scattered across multiple blocks for processes.
55
New cards
Single-Partition Allocation
OS resides in low or high memory; user processes share the single remaining partition.
56
New cards
Multiple-Partition Allocation
Fixed-sized partitions hold one process each.
57
New cards
Dynamic Partitions
Allocates blocks as processes arrive, creating holes.
58
New cards
First-Fit Algorithm
Allocates the first sufficiently large hole.
59
New cards
Best-Fit Algorithm
Allocates the smallest sufficient hole.
60
New cards
Worst-Fit Algorithm
Allocates the largest available hole.
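The three fit strategies above can be sketched as hole-selection functions; `holes` is assumed to be a list of free-block sizes, and each function returns the index of the chosen hole, or `None` if no hole fits.

```python
# Dynamic-partition hole selection: first-fit, best-fit, worst-fit.

def first_fit(holes, request):
    """Index of the first hole large enough for the request."""
    return next((i for i, h in enumerate(holes) if h >= request), None)

def best_fit(holes, request):
    """Index of the smallest sufficient hole."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    """Index of the largest available hole."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return max(fits)[1] if fits else None
```

With holes of sizes `[100, 500, 200, 300, 600]` and a request of 212, first-fit picks the 500 KB hole, best-fit the 300 KB hole, and worst-fit the 600 KB hole.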
61
New cards

External Fragmentation

Sufficient total memory exists, but it's not contiguous, so a process cannot be allocated memory.

62
New cards
50% Rule
For every N allocated blocks, about 0.5N more are lost to fragmentation, so roughly one-third of memory may be unusable.
63
New cards
Internal Fragmentation
Unused space within allocated memory partitions.
64
New cards
Compaction
Moves processes to eliminate external fragmentation.
65
New cards
Paging
Divides logical memory into fixed-size pages and physical memory into same-sized frames.
66
New cards
Segmentation
Divides memory into variable-sized segments.
67
New cards
Segmentation with Paging
Combines segmentation and paging features.
68
New cards
Page Table
Maps pages to frames in memory.
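The page-table lookup can be sketched as address arithmetic: a logical address splits into a page number and an offset, and the table maps the page to a frame. The 4 KiB page size is an illustrative choice.

```python
# Paging address translation sketch.
PAGE_SIZE = 4096  # 4 KiB pages, assumed for illustration

def translate(logical, page_table, page_size=PAGE_SIZE):
    """Map a logical address to a physical address via the page table."""
    page = logical // page_size              # page number
    offset = logical % page_size             # offset within the page
    frame = page_table[page]                 # raises KeyError if page unmapped
    return frame * page_size + offset
```

For instance, with page 1 mapped to frame 2, logical address 4100 (page 1, offset 4) translates to physical address 2 × 4096 + 4 = 8196.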
69
New cards
Shared Pages
Read-only code shared among processes.
70
New cards
Segment Table
Maps segments with base and limit addresses.
71
New cards
Dynamic Loading
Loads subroutines only when needed.
72
New cards
Dynamic Linking
Uses stubs as pointers to DLL routines.
73
New cards
Overlays
Keeps necessary instructions in memory only.
74
New cards
Logical Address
Generated by CPU, also called virtual address.
75
New cards
Physical Address
Generated by MMU, requires hardware mapping.
76
New cards
Swapping
Moves processes to secondary memory when needed.
77
New cards
Contiguous Allocation
Processes occupy continuous memory blocks.
78
New cards
Fragmentation
Inefficient memory use due to allocation methods.
79
New cards

External Fragmentation

Sufficient total memory exists, but it's not contiguous, so a process cannot be allocated memory.

80
New cards

Internal Fragmentation

Free space exists within an allocated partition that a process doesn't use.

81
New cards
Compaction
Moves processes to create contiguous memory blocks.
82
New cards
Non-Contiguous Allocation
Uses paging or segmentation for memory allocation.
83
New cards

Paging

Non-Contiguous Allocation Method whose disadvantage is that Internal fragmentation may occur if a page is not fully occupied.

84
New cards
Shared Pages
Non-self-modifying code accessed by multiple processes.
85
New cards
Segmentation
Divides logical memory into intuitive segments.
86
New cards

File System

Provides mechanism for online storage and access to both data and programs

87
New cards
File Attributes
Properties like name, type, and size.
88
New cards
File Operations
Actions like create, read, delete, and truncate.
89
New cards
Access Methods
Ways to access data: sequential, direct, indexed.
90
New cards
Directory Structure
Organizes files into partitions or volumes.
91
New cards
File Protection
Controls operations via permissions and access groups.
92
New cards
Linked Allocation
Files as linked lists of disk blocks.
93
New cards
FAT
File Allocation Table; a table at the start of the volume stores the link from each block to the next.
94
New cards

Indexed Allocation

Index block contains pointers to all file blocks

95
New cards
Advantages of Indexed Allocation
No external fragmentation, supports direct access.
96
New cards
Disadvantages of Indexed Allocation
Overhead of maintaining the index block.
97
New cards
Unix inode
Stores pointers and indirect references for files.
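The inode's direct and indirect pointers determine the maximum file size, which can be sketched with a little arithmetic. The counts below (12 direct pointers plus single, double, and triple indirect blocks) follow the classic Unix layout but are assumptions here, not values from the cards.

```python
# Maximum file size addressable through a classic Unix inode (sketch).

def max_file_size(block_size=4096, ptr_size=4, direct=12):
    """Blocks reachable via direct + single/double/triple indirect pointers."""
    ptrs = block_size // ptr_size            # pointers per indirect block
    blocks = direct + ptrs + ptrs**2 + ptrs**3
    return blocks * block_size               # bytes
```

With 4 KiB blocks and 4-byte pointers, a single indirect block alone adds 1024 data blocks, and the triple indirect level dominates the total.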
98
New cards
Bit Vector/Map
Bit array indicating block allocation status.
99
New cards
Advantages of Bit Vector
Simple implementation for free space management.
100
New cards
Disadvantages of Bit Vector
Inefficient if not kept in memory.
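The bit-vector scheme above can be sketched with a small class; the convention here (bit = 1 means the block is free) and the method names are illustrative.

```python
# Free-space management sketch: one bit per disk block.

class BitVector:
    def __init__(self, nblocks):
        self.bits = [1] * nblocks            # 1 = free, 0 = allocated

    def first_free(self):
        """Index of the first free block, or None if none remain."""
        return next((i for i, b in enumerate(self.bits) if b), None)

    def allocate(self, i):
        self.bits[i] = 0                     # mark block i in use

    def free(self, i):
        self.bits[i] = 1                     # mark block i free again
```

Finding the first free block is a linear scan of the bits, which is fast only when the whole vector fits in memory, matching the disadvantage noted above.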