CS 370 Final Review: Deadlocks, Memory, and Security

100 Terms

1

Deadlock

A situation in which a set of processes is unable to proceed because each is waiting for a resource held by another process in the set.

2

Mutual Exclusion

Only one process can use a resource at a time.

3

Hold and Wait

Processes holding resources can request additional resources.

4

No Preemption

Resources cannot be forcibly taken from a process; they must be released voluntarily.

5

Circular Wait

A set of processes exists where each process is waiting for a resource held by the next process in the chain.

6

Prevention

Ensure at least one of the four necessary conditions cannot occur, e.g., prevent circular wait by imposing an ordering on resource allocation.
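
As a concrete sketch of the resource-ordering idea, the C snippet below (an illustration, not course code) has every thread acquire two POSIX mutexes in the same global order, so a circular wait can never form; the lock names and thread bodies are assumed for the example.

```c
#include <pthread.h>
#include <stdio.h>

/* Global ordering: lock_a (rank 1) must always be taken before lock_b (rank 2). */
static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    /* Both threads respect the same order, so no circular wait can arise. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("thread %ld holds both locks\n", (long)arg);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```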

7

Avoidance

Use algorithms like the Banker's Algorithm to determine safe states.

8

Detection and Recovery

Allow deadlocks to occur but detect them using techniques like a wait-for graph and recover by terminating processes or preempting resources.

9

Safe State

A system is in a safe state if it can allocate resources to all processes in a sequence such that deadlock is avoided.

10

Banker's Algorithm

It determines whether the system can remain in a safe state after granting a resource request.
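
Below is a minimal sketch of the safety check at the heart of the Banker's Algorithm, assuming a small hard-coded system of 5 processes and 3 resource types; the Allocation, Max, and Available values are illustrative, not from the course.

```c
#include <stdbool.h>
#include <stdio.h>

#define P 5  /* processes */
#define R 3  /* resource types */

/* Illustrative example data (assumed for this sketch). */
int allocation[P][R] = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
int max_need[P][R]   = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
int available[R]     = {3,3,2};

/* Returns true if the system is in a safe state. */
bool is_safe(void)
{
    int work[R];
    bool finished[P] = {false};

    for (int r = 0; r < R; r++)
        work[r] = available[r];

    for (int done = 0; done < P; ) {
        bool progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p])
                continue;
            /* Can process p's remaining need be satisfied by work? */
            bool can_run = true;
            for (int r = 0; r < R; r++)
                if (max_need[p][r] - allocation[p][r] > work[r])
                    can_run = false;
            if (can_run) {
                /* Pretend p runs to completion and releases its allocation. */
                for (int r = 0; r < R; r++)
                    work[r] += allocation[p][r];
                finished[p] = true;
                done++;
                progress = true;
            }
        }
        if (!progress)
            return false;  /* no process could proceed: unsafe state */
    }
    return true;
}

int main(void)
{
    printf("system is %s\n", is_safe() ? "in a safe state" : "NOT in a safe state");
    return 0;
}
```

A resource request is granted only if, after tentatively granting it, is_safe() still returns true.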

11

Wait-for Graph

A graph used for deadlock detection in systems where each resource type has a single instance.
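
A small sketch of deadlock detection on a wait-for graph, assuming the graph is stored as an adjacency matrix where waits_for[i][j] = 1 means process i waits for process j; the example edges are made up and deliberately contain a cycle.

```c
#include <stdbool.h>
#include <stdio.h>

#define N 4  /* number of processes (assumed for the example) */

/* waits_for[i][j] = 1 means process i is waiting for a resource held by j.
 * Example edges: 0 -> 1 -> 2 -> 0 form a cycle, i.e., a deadlock. */
int waits_for[N][N] = {
    {0, 1, 0, 0},
    {0, 0, 1, 0},
    {1, 0, 0, 0},
    {0, 0, 0, 0},
};

/* DFS with colors: 0 = unvisited, 1 = on the current path, 2 = done. */
static bool dfs(int p, int color[])
{
    color[p] = 1;
    for (int q = 0; q < N; q++) {
        if (!waits_for[p][q])
            continue;
        if (color[q] == 1)          /* back edge: cycle found */
            return true;
        if (color[q] == 0 && dfs(q, color))
            return true;
    }
    color[p] = 2;
    return false;
}

int main(void)
{
    int color[N] = {0};
    for (int p = 0; p < N; p++)
        if (color[p] == 0 && dfs(p, color)) {
            printf("deadlock: cycle detected in the wait-for graph\n");
            return 0;
        }
    printf("no deadlock detected\n");
    return 0;
}
```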

12

Process Termination

Terminate one or more processes involved in the deadlock.

13

Resource Preemption

Forcefully take resources from one process and allocate them to another.

14

Base Register

Holds the smallest legal physical memory address.

15

Limit Register

Specifies the size of the range of legal addresses.
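
Together, the base and limit registers bound the legal range [base, base + limit). A tiny C sketch of the check the hardware performs on every address; the register values are assumed example numbers, not a real MMU interface.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative register contents (assumed values). */
static const uint32_t base  = 300040;
static const uint32_t limit = 120900;

/* The hardware traps to the OS if an address falls outside [base, base + limit). */
static bool address_is_legal(uint32_t addr)
{
    return addr >= base && addr < base + limit;
}

int main(void)
{
    printf("%u -> %s\n", 300040u, address_is_legal(300040) ? "ok" : "trap");
    printf("%u -> %s\n", 420940u, address_is_legal(420940) ? "ok" : "trap");
    return 0;
}
```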

16

Compile Time

Addresses are determined during compilation.

17

Load Time

Binding occurs when the program is loaded into memory.

18

Execution Time

Binding happens dynamically during execution, allowing relocation.

19

Physical Address

Addresses generated after translation by the MMU.

20

Memory Management Unit (MMU)

The hardware device that maps virtual to physical addresses.

21

First Fit

Allocate the first hole large enough.

22

Best Fit

Allocate the smallest hole large enough.

23

Worst Fit

Allocate the largest available hole.
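
To contrast the three placement strategies above (First Fit, Best Fit, Worst Fit), here is a small sketch over an assumed list of free-hole sizes; the hole sizes and the 212 KB request are made-up example values.

```c
#include <stdio.h>

#define NHOLES 5

/* Sizes (in KB) of the current free holes -- assumed example data. */
int holes[NHOLES] = {100, 500, 200, 300, 600};

/* Each function returns the index of the chosen hole, or -1 if none fits. */
int first_fit(int request)
{
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request)
            return i;               /* first hole large enough */
    return -1;
}

int best_fit(int request)
{
    int best = -1;
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request && (best == -1 || holes[i] < holes[best]))
            best = i;               /* smallest hole large enough */
    return best;
}

int worst_fit(int request)
{
    int worst = -1;
    for (int i = 0; i < NHOLES; i++)
        if (holes[i] >= request && (worst == -1 || holes[i] > holes[worst]))
            worst = i;              /* largest hole */
    return worst;
}

int main(void)
{
    int request = 212;  /* KB */
    printf("first fit -> hole %d\n", first_fit(request));  /* the 500 KB hole */
    printf("best fit  -> hole %d\n", best_fit(request));   /* the 300 KB hole */
    printf("worst fit -> hole %d\n", worst_fit(request));  /* the 600 KB hole */
    return 0;
}
```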

24

Internal Fragmentation

Wasted space within allocated memory blocks.

25

External Fragmentation

Wasted space between allocated blocks.

26

Page number and page offset

The two parts of an address generated by the CPU.
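
A brief sketch of how the CPU-generated address splits, assuming 4 KB pages (a 12-bit offset) and a 32-bit logical address; the sample address is arbitrary.

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12                      /* 4 KB pages => 12-bit offset */
#define PAGE_SIZE   (1u << OFFSET_BITS)

int main(void)
{
    uint32_t logical = 0x0012ABCD;          /* arbitrary sample address */

    uint32_t page_number = logical >> OFFSET_BITS;        /* high-order bits */
    uint32_t page_offset = logical & (PAGE_SIZE - 1);     /* low-order bits  */

    printf("logical 0x%08X -> page %u, offset 0x%03X\n",
           (unsigned)logical, (unsigned)page_number, (unsigned)page_offset);
    return 0;
}
```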

27

Page table entry

Contains frame number (physical address of the page) and control bits (e.g., valid/invalid, read/write permissions).

28

Fragmentation in paging systems

False. Paging eliminates external fragmentation but may suffer from internal fragmentation.

29

TLB Miss

Occurs when the page number is not found in the TLB, so the page table in memory must be consulted (and the mapping is then loaded into the TLB).

30

Page size calculation

If a page offset is 13 bits, the page size is 2^13 bytes = 8 KB.

31

Two-level page table entries

With a 20-bit page number, there are 2^20 = 1,048,576 entries.

32

Alternative to hierarchical paging

Inverted Page Table or Segmentation.

33

IA-32 address translation

True. IA-32 uses a combination of paging and segmentation.

34

IA-64 addressing bits usage

False. Only a subset of the 64 bits is typically used.

35

Components of a 32-bit ARM address

Page number, page table index, and page offset.

36

Virtual memory execution

True. Virtual memory allows programs to execute even if only part of their code or data is in memory.

37

Physical vs Virtual address space

False. Virtual address space can be much larger than physical memory, thanks to virtual memory.

38

Page fault occurrence

When a process attempts to access a page that is not currently in memory.

39

Pure demand-paged system behavior

True. Pure demand paging fetches pages only when a page fault occurs.

40

System call for copy on write

fork(). When a child process is created, pages are shared until either process writes to a page.
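
A minimal POSIX sketch of the behavior described above: after fork(), parent and child initially share pages, and the child's write triggers a private copy, so the parent's value is unchanged. The variable and values are just for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int value = 42;                 /* lives on a page shared after fork() */

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        exit(1);
    }

    if (pid == 0) {
        /* Child: the first write causes the kernel to copy the page. */
        value = 99;
        printf("child  sees value = %d\n", value);
        exit(0);
    }

    wait(NULL);
    /* Parent still sees its own (uncopied) page contents. */
    printf("parent sees value = %d\n", value);
    return 0;
}
```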

41

Simplest page replacement algorithm

FIFO (First In, First Out).
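
A short sketch of FIFO replacement, assuming 3 frames and a made-up reference string; on each fault it evicts the page that has been resident the longest.

```c
#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};     /* assumed reference string */
    int nrefs = sizeof refs / sizeof refs[0];

    int frames[FRAMES];
    for (int i = 0; i < FRAMES; i++)
        frames[i] = -1;                              /* -1 = empty frame */

    int next_victim = 0;                             /* oldest frame (FIFO pointer) */
    int faults = 0;

    for (int i = 0; i < nrefs; i++) {
        int hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (frames[f] == refs[i])
                hit = 1;

        if (!hit) {
            /* Page fault: replace the page that has been resident the longest. */
            frames[next_victim] = refs[i];
            next_victim = (next_victim + 1) % FRAMES;
            faults++;
        }
    }

    printf("FIFO page faults: %d\n", faults);
    return 0;
}
```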

42

Optimal Page Replacement Algorithm

Replaces the page that will not be used for the longest period of time.

43

Least Recently Used (LRU)

Page replacement algorithm that replaces the page that has not been used for the longest time; it can be implemented using a stack or counters.
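
A sketch of the counter-based LRU approach mentioned above: each resident page records the time of its last use, and the victim is the page with the smallest timestamp. The frame count and reference string are assumptions for the example.

```c
#include <stdio.h>

#define FRAMES 3

int main(void)
{
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};     /* assumed reference string */
    int nrefs = sizeof refs / sizeof refs[0];

    int pages[FRAMES], last_used[FRAMES];
    for (int f = 0; f < FRAMES; f++) {
        pages[f] = -1;                               /* -1 = empty frame */
        last_used[f] = -1;
    }

    int faults = 0;
    for (int t = 0; t < nrefs; t++) {
        int hit = -1;
        for (int f = 0; f < FRAMES; f++)
            if (pages[f] == refs[t])
                hit = f;

        if (hit >= 0) {
            last_used[hit] = t;                      /* refresh the use counter */
        } else {
            /* Page fault: evict the least recently used (smallest counter). */
            int victim = 0;
            for (int f = 1; f < FRAMES; f++)
                if (last_used[f] < last_used[victim])
                    victim = f;
            pages[victim] = refs[t];
            last_used[victim] = t;
            faults++;
        }
    }

    printf("LRU page faults: %d\n", faults);
    return 0;
}
```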

44

Approximation algorithms in LRU

True. LRU is complex to implement directly, so approximations like the clock algorithm are used.

45

Global vs Local page replacement

Global Replacement: Pages can be replaced from any process. Local Replacement: Pages are replaced only within the faulting process.

46

Thrashing

Situation where a process spends more time paging than executing.

47

Working Set

Set of pages a process is currently referencing.

48

Page fault rate with pure demand paging

True. The page fault rate decreases as more pages are brought into memory.

49

Shared memory implementation

False. Shared memory is often implemented using memory mapping.

50

TLB reach calculation

With 32 TLB entries and a 4 KB page size, TLB reach = 32 × 4 KB = 128 KB.

51

Typical page size

True. 4 KB is a typical page size.

52

Support for large page sizes

True. Many systems support large page sizes for performance reasons.

53

Magnetic disks

Provide almost all secondary storage for modern computer systems.

54

Solid-state disks (SSDs)

Have different performance characteristics, including no moving parts and faster access times.

55

Sector

The smallest unit of transfer between the disk and memory.

56

Sequential Access

One of the two ways a computer can access disk storage: information is processed in order, one block after the other.

57

Direct Access

One of the two ways a computer can access disk storage: blocks can be read or written in any order by giving their address.

58

FCFS (First Come First Serve)

One of the three general disk scheduling algorithms: requests are serviced in the order in which they arrive.

59

SSTF (Shortest Seek Time First)

One of the three general disk scheduling algorithms: the pending request closest to the current head position is serviced next.

60

SCAN (Elevator Algorithm)

One of the three general disk scheduling algorithms: the head sweeps across the disk in one direction, servicing requests along the way, then reverses direction.
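
A compact sketch of the SCAN service order on an assumed request queue: the head moves toward higher cylinders first, servicing requests on the way, then reverses. The head position, direction, and queue are made-up example values.

```c
#include <stdio.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

int main(void)
{
    int head = 53;                                      /* assumed start cylinder */
    int queue[] = {98, 183, 37, 122, 14, 124, 65, 67};  /* assumed pending requests */
    int n = sizeof queue / sizeof queue[0];

    qsort(queue, n, sizeof queue[0], cmp_int);

    /* Find the first request at or above the head position. */
    int split = 0;
    while (split < n && queue[split] < head)
        split++;

    printf("SCAN service order (moving toward higher cylinders first):\n");

    /* Sweep up: service requests >= head in increasing order. */
    for (int i = split; i < n; i++)
        printf("%d ", queue[i]);

    /* Reverse: service the remaining requests in decreasing order. */
    for (int i = split - 1; i >= 0; i--)
        printf("%d ", queue[i]);

    printf("\n");
    return 0;
}
```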

61

Formatting

What must be done to a disk before it can be used for storage, creating a file system on the disk.

62

Master Boot Record (MBR)

Where the Windows system places its boot code.

63

RAID

Uses multiple disks to improve performance and reliability through redundancy.

64

Parity

Redundant information (e.g., the XOR of the data blocks in a stripe) stored so that a corrupted or lost block can be reconstructed.

65

Redundancy in HDFS

Ensures that multiple copies of data are stored across different nodes for fault tolerance.

66

File attributes

Name, size, type, permissions, owner, and timestamps (creation, modification, access).

67

File operations

Create, read, write, delete, open, close, append.

68

Volume

A logical storage unit that may span physical disks.

69

Hierarchical Directory Structure

The most common way of structuring directories.

70

Acyclic-graph directories

Supported by UNIX systems using symbolic links.

71

Complete path name

/home/programs, given that the device containing jane/programs is mounted at the mount point /home.

72

NFS (Network File System)

One type of remote file system.

73

User classifications

Owner, group, and others in connection with each file.

74

Ext4

The name of the standard Linux file system.

75

Inode

The UNIX term for a file control block.

76

Virtual File System (VFS)

An abstraction layer that provides a single, uniform interface to different underlying file system implementations.

77

Directory implementation approaches

Linear List: Simple but slow; Hash Table: Fast lookup with some complexity.

78

Disk block allocation methods

Contiguous Allocation: Blocks are stored together sequentially; Linked Allocation: Blocks are linked via pointers.

79

Bit vector example

Using the convention that bit i = 1 means block i is free, the bit vector appears as 10011010 if blocks 0, 3, 4, and 6 are free and the rest are allocated.
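
A tiny sketch of free-space management with a bit vector, using the same convention as the card (bit i = 1 means block i is free); it marks blocks 0, 3, 4, and 6 free, prints the vector, and allocates the first free block. The 8-block disk size is an assumption.

```c
#include <stdint.h>
#include <stdio.h>

#define NBLOCKS 8

/* Bit i of the vector is 1 if block i is free, 0 if allocated. */
static uint8_t bitmap = 0;

static void mark_free(int block)      { bitmap |=  (uint8_t)(1u << block); }
static void mark_allocated(int block) { bitmap &= (uint8_t)~(1u << block); }
static int  is_free(int block)        { return (bitmap >> block) & 1u; }

int main(void)
{
    /* Blocks 0, 3, 4, and 6 are free, as in the card above. */
    mark_free(0);
    mark_free(3);
    mark_free(4);
    mark_free(6);

    /* Print the vector with block 0 first. */
    printf("bit vector: ");
    for (int b = 0; b < NBLOCKS; b++)
        printf("%d", is_free(b));
    printf("\n");

    /* Allocation scans for the first set bit. */
    for (int b = 0; b < NBLOCKS; b++)
        if (is_free(b)) {
            printf("allocating block %d\n", b);
            mark_allocated(b);
            break;
        }
    return 0;
}
```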

80

HDFS

Designed for distributed storage, fault tolerance, and handling large files across clusters.

81

MapReduce

Provides a framework for processing and generating large datasets in parallel.

82

Full Virtualization

Simulates hardware entirely.

83

Paravirtualization

The guest OS is modified to cooperate with the VMM, which avoids simulating hardware entirely and improves performance.

84

Containers, Jails, Zones, Runtime Environments

They share the host OS kernel and lack full isolation.

85

Memory mapping in a virtualized system

Maps guest virtual addresses to host physical memory via a Virtual Machine Monitor (VMM).

86

Binary translation

Technique used by VMMs on CPUs that lack hardware virtualization support: sensitive guest instructions are translated at run time into safe, equivalent instruction sequences.

87

Live migration

Possible in virtual environments due to hardware abstraction, allowing seamless transfer of VMs.

88

Containers implementation

Use OS-level virtualization, sharing the host OS kernel.

89

Advantages of containers

Lightweight, faster deployment, and reduced resource overhead.

90

Data centers usage

Increasingly used for scalability, efficiency, and to support modern cloud-based applications.

91

Social Engineering

The most common technique for security attacks, involving tricking individuals into divulging confidential information.

92

Physical Level security measures

Protect physical access to systems (e.g., locks, guards).

93

Human Level security measures

Ensure security practices among personnel (e.g., training, awareness).

94

Operating System Level security measures

Implement OS-based protections (e.g., access control, authentication).

95

Network Level security measures

Secure data transmission (e.g., firewalls, encryption).

96

Trojan Horse

A program that appears legitimate but performs malicious actions.

97

Buffer Overflow

Exploiting a program by overflowing its memory buffer to execute arbitrary code.
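
For study purposes only, the sketch below shows the kind of bug the card describes: copying untrusted input into a fixed-size stack buffer with no length check can overwrite adjacent memory (real exploits go on to overwrite the return address). The buffer size and the use of strcpy/snprintf are assumptions for the example.

```c
#include <stdio.h>
#include <string.h>

/* UNSAFE: strcpy() does not check the destination size, so input longer
 * than 15 characters (plus the terminating '\0') overflows buf and
 * corrupts whatever lies next to it on the stack. */
static void vulnerable(const char *input)
{
    char buf[16];
    strcpy(buf, input);            /* no bounds check -- the core of the bug */
    printf("copied: %s\n", buf);
}

/* Safer alternative: bound the copy to the buffer size. */
static void safer(const char *input)
{
    char buf[16];
    snprintf(buf, sizeof buf, "%s", input);   /* truncates instead of overflowing */
    printf("copied: %s\n", buf);
}

int main(int argc, char **argv)
{
    const char *input = (argc > 1) ? argv[1] : "short and safe";
    vulnerable("ok");    /* safe only because the argument is short */
    safer(input);        /* handles arbitrarily long input without overflow */
    return 0;
}
```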

98

Worms

Self-replicating programs that spread across systems and networks without user intervention.

99

Password Authentication

The most common approach for authenticating a user, where users provide a password to verify their identity.

100

Firewall

Monitors and controls incoming and outgoing network traffic based on predetermined security rules.