Operating Systems Final Exam


114 Terms

1
New cards

Demand Paging

A form of paging where pages reside on disk and are brought into memory only when needed. Unless it is pure ________, the pages holding the initial code and data start out in memory. Any access to a page that is not in memory triggers a page fault.

2
New cards

Swap Space

Portion of the disk reserved for page data modified by a program, such as the heap/stack pages

3
New cards

Temporal Locality

If a process accesses an item in memory, it will tend to reference the same item again soon.

4
New cards

Spatial Locality

If a process accesses an item in memory, it will tend to reference an adjacent item soon.

5
New cards

Effective Access Time (with Demand Paging)

(1-p)* memory access time + p * (page fault time)

p = page fault rate, where 0 is no page faults and 1 is only page faults
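As a minimal sketch (a hypothetical helper with made-up timing numbers), the formula can be computed directly:

```python
def effective_access_time(mem_ns, fault_ns, p):
    """EAT = (1 - p) * memory access time + p * page fault service time."""
    return (1 - p) * mem_ns + p * fault_ns

# Assumed numbers: 100 ns memory access, 8 ms (8,000,000 ns) fault service.
eat = effective_access_time(100, 8_000_000, 0.001)
```

Even one fault per thousand accesses pushes the average from 100 ns to roughly 8,100 ns, which is why the fault rate must be kept tiny.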

6
New cards

FIFO (Page Replacement)

Evicting the oldest page
Simple to implement, but doesn’t consider how frequently a page has been accessed, and can lead to increased disk access
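A minimal fault-counting sketch of FIFO (hypothetical helper, not from the course material):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO: on a fault with all frames full,
    evict the page that has been resident the longest."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(order.popleft())  # evict the oldest page
            frames.add(page)
            order.append(page)
    return faults
```

On the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 this gives 9 faults with 3 frames but 10 with 4, the textbook demonstration of Belady's anomaly.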

7
New cards

OPT (MIN, Belady’s algorithm)

A theoretical algorithm that evicts the page whose next use lies farthest in the future, giving the minimum possible number of page faults.
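A sketch of OPT (hypothetical helper; note that it needs the entire future reference string, which is exactly why the algorithm is only theoretical):

```python
def opt_faults(refs, nframes):
    """Belady's OPT: on a fault, evict the resident page whose next
    use is farthest in the future (or that is never used again)."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)   # next reference to p
                except ValueError:
                    return float("inf")           # never used again
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults
```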

8
New cards

LRU (Least Recently Used)

Evicts the page that has gone unused for the longest time. It is a good approximation of Belady’s algorithm, but performs poorly if locality is low (which it usually isn’t in most programs).

Can be implemented with time stamps for each page or a linked list of pages, being expensive in time and space respectively.
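The linked-list variant can be sketched with an ordered dictionary, which keeps recency order without per-page timestamps (hypothetical helper):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count page faults under LRU, keeping frames in recency order."""
    frames, faults = OrderedDict(), 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)        # now the most recently used
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)  # evict least recently used
            frames[page] = True
    return faults
```

On the reference string 1,2,3,4,1,2,5,1,2,3,4,5, LRU gives 10 faults with 3 frames and 8 with 4: more frames never hurt it, unlike FIFO.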

9
New cards

Belady’s Anomaly

When adding another memory frame leads to more page faults. It occurs with FIFO; stack algorithms such as LRU do not exhibit it.

10
New cards

Additional Reference Bits Algorithm

An approximation of LRU, historically used by Linux, based on an 8-bit history value per page. On reference, a page’s high bit is set to 1, and the value is shifted right at a regular interval. On a page fault, the page with the lowest value is evicted.
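The shifting step can be sketched as follows (hypothetical helpers; `history` maps each page to its 8-bit value):

```python
def age_tick(history, referenced):
    """One interval: shift each page's 8-bit value right, setting the
    high bit for pages referenced during the interval."""
    for page in history:
        history[page] = (history[page] >> 1) | (0x80 if page in referenced else 0)

def aging_victim(history):
    """On a page fault, evict the page with the lowest value."""
    return min(history, key=history.get)
```

A page referenced two intervals ago (value 0x40) loses to one referenced last interval (0x80): recent references dominate older ones.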

11
New cards

Second-Chance Algorithm

An algorithm for approximating LRU. Each frame is kept in a circular list and has a single reference bit. On reference, the bit is set to 1. On a page fault, the OS checks the reference bit of the frame under the pointer. If the bit is 0, that page is replaced. If it is 1, the bit is cleared to 0 and the pointer advances to the next frame.

The worst case, although infrequent, occurs when all pages in the list have a bit of 1: the hand must sweep the whole list before evicting.
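The clock sweep can be sketched as (hypothetical helper operating on the per-frame reference bits):

```python
def second_chance_victim(ref_bits, hand):
    """Advance the clock hand, clearing reference bits, until a frame
    with bit 0 is found; return (victim frame, new hand position)."""
    n = len(ref_bits)
    while ref_bits[hand] == 1:
        ref_bits[hand] = 0        # second chance: clear the bit, move on
        hand = (hand + 1) % n
    return hand, (hand + 1) % n
```

In the all-ones worst case the hand sweeps the entire list, clears every bit, and ends up evicting the frame it started at.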

12
New cards

Enhanced Second-Chance Algorithm

An approximation of LRU that uses a reference bit and a modify bit, with a circular list of the frames. The modify bit being 1 indicates that the page was modified. For (r,m):

(1,1) becomes (0,1)

(1,0) becomes (0,0)

(0,1) locks the page in memory until the completion of the I/O to write out the page, and becomes (0,0) once finished

(0,0) is replaced

13
New cards

Counting-Based Algorithms

Page-replacement algorithms that keep a counter of the number of references made to each page. With Least Frequently Used (LFU), the page with the smallest count is replaced, though this may evict a page that was only just brought in. With Most Frequently Used (MFU), the page with the largest count is replaced, on the argument that pages with small counts were probably just loaded.

14
New cards

Fixed Allocation

The initial allocation of frames when programs start.

Equal Allocation - Each process gets an equal share of the free frames, which handles processes of different sizes poorly (small processes waste frames that large ones need)

Proportional/Variable Allocation - Frames are allocated based on process size. Allocation for process i = (size of process i / sum of all process sizes) * total number of frames
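The proportional formula can be sketched as follows (hypothetical helper; the sizes and frame count below are assumed example values):

```python
def proportional_allocation(sizes, total_frames):
    """Each process gets (its size / total size) * total frames,
    truncated to whole frames."""
    total = sum(sizes.values())
    return {p: s * total_frames // total for p, s in sizes.items()}

# Assumed example: processes of 10 and 127 pages sharing 62 free frames.
shares = proportional_allocation({"p1": 10, "p2": 127}, 62)
```

Here the small process gets 4 frames and the large one 57, instead of 31 each under equal allocation.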

15
New cards

Global Allocation

For page replacement: a process selects a replacement frame from the entire set of frames, so one process can take a frame from another. A single replacement algorithm (e.g., LRU) is used over all pages.

It is flexible and gives greater throughput, making it the more common approach.

However, because processes are not isolated from each other’s paging behavior, thrashing can spread between processes and execution times vary widely.

16
New cards

Thrashing

When a process spends more time paging than executing: pages are constantly evicted and immediately faulted back in.

17
New cards

Local Allocation

When, during a page fault, frames are selected only from the faulting process’s own allocated frames, so one process’s page faults don’t affect another process. This can leave memory underutilized, or leave a process without enough frames.

18
New cards

Working-Set Model

Model that gives each process enough frames for its working-set size, based on its recent references.

WSSi is the number of distinct pages referenced by process Pi in its most recent Δ references. The total demand for frames, D, is the sum of all the WSSi, and approximates the total locality. If D > m (the total number of frames), thrashing will occur; it can be handled by suspending or swapping out a process.

Because exact tracking is expensive, a sampling approach is often used for approximation (working-set determination).
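The bookkeeping above can be sketched as (hypothetical helpers; each process is represented by its reference stream):

```python
def working_set(refs, delta):
    """WSS: the distinct pages among the most recent delta references."""
    return set(refs[-delta:])

def total_demand(streams, delta):
    """D = sum of WSS_i over all processes; if D > m (total frames),
    thrashing is likely and a process should be suspended."""
    return sum(len(working_set(refs, delta)) for refs in streams)
```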

19
New cards

Working Set

The set of all pages that a process has referenced in the most recent Δ references (the working-set window), Δ being a fixed number chosen by the OS.

20
New cards

Page-Fault Frequency-Based Model

More direct in allocation than the working-set model: the number of frames allocated to a process is adjusted to keep its page-fault rate between an upper and a lower bound. If the fault rate falls below the lower bound, the process has frames to spare, so frames are taken away. If the fault rate rises above the upper bound (thrashing), frames are added.

21
New cards

File

A logical unit of data on a disk, holding related information. Formally, it is an array of bytes that can be created, read, written, and deleted.

22
New cards

Filename

Regular name of a file for human readability.

23
New cards

i-number

Unique low-level number for each file; in Unix/Linux it identifies the file’s i-node.

24
New cards

Directory

A special type of file that is a logical container to organize files and other instances of itself.

25
New cards

File System

Kernel software for managing files and directories, making it the manager of secondary storage.

26
New cards

Unstructured Approach

Approach to a file system with no restrictions on how file contents are laid out; the data is arbitrary. UNIX and Linux implement files simply as a sequence of bytes.

27
New cards

Structured Approach

File system approach used more often by older systems, imposing rules and restrictions on file layout. IBM mainframes implement files as a series of records or objects.

28
New cards

File Attribute

File Control Block (FCB): the metadata stored for a file, containing its name, identifier, type, location, size, protection, and usage times/dates. Kept in kernel memory while the file is in use (and, in real systems, persistently on storage). Corresponds to the i-node in Linux and the MFT entry in NTFS.

29
New cards

System-Wide Open File Table

A global file table shared by all processes, each entry being an FCB. It contains:

  • All file information currently open by all processes

  • File open count

  • File attributes

  • File location on disk

  • Pointers to locations of the files in memory

30
New cards

Per-Process File Table

Table containing a process’s open files. Contains a pointer to the corresponding file in the system-wide open file table, a current position in the file (offset), and permission info. The PCB has a pointer to the table.

31
New cards

File Lock

A mechanism that prevents multiple processes from accessing a file or part of a file concurrently.

Shared Lock - Multiple processes can hold it concurrently; complicated to implement and manage. Often used by database or caching systems.

Exclusive Lock - Only one process can hold it at a time. Locking can be mandatory (Windows), where the OS enforces it and deadlock or starvation is possible, or advisory (Unix), where processes must voluntarily check the lock to coordinate file access.

32
New cards

Direct/Random Read

When the OS reads a specified number of bytes starting at a given position in the file and copies them into the desired buffer

33
New cards

Sequential Read

When the OS reads a specified number of bytes starting at the current file position, copies them into the buffer, and advances the file position by the amount read.

34
New cards

Sequential Access

When data is processed in order, a byte or record at a time. It’s often used by compilers, editors, and many other programs, but it is inflexible with movement from one spot to another.

35
New cards

Direct Access

Accessing a file’s bytes in arbitrary order. The file is divided into fixed-length logical records that can be read in any order, located by the addresses of their blocks. Supported in Linux via lseek.

36
New cards

Memory-Mapped File

A file that is mapped into main memory, removing the need for write/read calls. However, the file information can be lost while in memory if an error occurs.

37
New cards

Single-Level Directory

Directory structure with a single directory containing every file, so every filename must be unique. Used by early computers due to small disk sizes.

38
New cards

Two-Level Directory

Directory structure where each user has a separate directory, but each file under a user must have a unique name.

39
New cards

Tree-Structure Directory

A directory structure where the only limitation to directory nesting is disk size. Directories have to be emptied to be deleted.

40
New cards

Hard Link

A directory entry that serves as another name for an already-existing file, sharing the same i-node. A file’s data is only deleted once all hard links to it have been removed.

41
New cards

Soft/Symbolic Link

A “file” that is really just a pointer to another file’s path, with its own i-node. The target file can be deleted while symbolic links to it still exist, leaving those links broken.

42
New cards

Deadlock

Condition where 2+ threads/processes are waiting for an event that can only be generated by these same threads

43
New cards

Deadlock Detection

Finding instances of deadlock when threads stop making progress

44
New cards

Deadlock Prevention

Imposing restrictions on a program to forbid deadlock from possibly occurring in advance

45
New cards

Deadlock Avoidance

Using algorithms that check resource requests and availability at runtime to not let deadlock occur

46
New cards

Starvation

Thread waiting indefinitely for resources that are being used by other threads

47
New cards

Conditions for deadlock

Mutual exclusion, hold and wait, no preemption, circular wait

48
New cards

Mutual Exclusion (Deadlock)

A resource can only be used by one thread at a time. Deadlock can be prevented by only using sharable resources that don’t need to be locked.

49
New cards

Hold and Wait

At least one thread holds a resource and is also waiting for other resources to become available. Deadlock can be prevented by either having every process have all of its necessary resources before executing, eliminating the need to wait; or by only allowing each process to request a resource when it has none, eliminating the ability to hold.

50
New cards

No preemption

A thread can only release a resource voluntarily, cannot be forced to by the OS or another thread. Deadlock can be prevented by having a process release its resources if it cannot obtain the resources it wants, or having the desired resources preempted from other waiting processes.

51
New cards

Circular Wait

When multiple threads wait for each other’s resources in a cycle: the thread holding resource 1 waits for resource 2, the thread holding resource 2 waits for resource 3, and so on until the chain loops back to the first thread.
Deadlock can be prevented by assigning each resource a number and requiring that a thread holding resource i only request resources numbered greater than i.
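The numbering rule is essentially lock ordering; a minimal sketch with hypothetical numbered locks:

```python
import threading

locks = {i: threading.Lock() for i in (1, 2, 3)}  # resource number -> lock

def acquire_ordered(*resource_ids):
    """Always lock in ascending resource number: a thread holding i
    never waits for j < i, so no cycle of waits can form."""
    for i in sorted(resource_ids):
        locks[i].acquire()

def release_all(*resource_ids):
    for i in sorted(resource_ids, reverse=True):
        locks[i].release()
```

Two threads that both call `acquire_ordered(1, 3)` contend on lock 1 first, so the hold-one-wait-for-the-other cycle never arises.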

52
New cards

Resource-Allocation Graph (RAG)

A graph used for deadlock detection and avoidance. It has request edges that go from a thread to a resource, indicating that a thread has requested a resource, and assignment edges that go from a resource to a thread, indicating that the resource has been allocated to the thread. If there are no cycles, there is no deadlock.

With deadlock avoidance, it additionally has the claim edge, indicating that a thread might request a resource but not right now. Requests can only be granted if a cycle isn’t formed by doing so.

53
New cards

Banker’s Algorithm

Deadlock avoidance algorithm used for resources with multiple instances. N threads, M resource types. Each thread declares the maximum number of resources it will need, and the objective is to find a safe sequence of resource allocation.

A variant can also be used for deadlock detection, checking current outstanding requests against available resources instead of declared future need against available resources.
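The safety check at the heart of the algorithm can be sketched as follows (hypothetical helper; `need` is each thread's declared maximum minus its current allocation):

```python
def is_safe(available, allocation, need):
    """Banker's safety check: repeatedly finish any thread whose
    remaining need fits in the work vector, reclaiming its resources.
    The system is safe iff every thread can finish in some order."""
    work = list(available)
    finished = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i in range(len(allocation)):
            if not finished[i] and all(n <= w for n, w in zip(need[i], work)):
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                progress = True
    return all(finished)
```

A request is granted only if the state after pretending to grant it still passes this check.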

54
New cards

Deadlock Recovery

Getting out of deadlock after it already happens. 

55
New cards

Resource Preemption (Deadlock Recovery)

Preempting some resources from processes and giving them to other processes until deadlock is over.

Includes victim selection and rollback, can lead to starvation.

56
New cards

Physical Address

A real address in memory, seen by the memory unit

57
New cards

Logical/Virtual Address

An address relative to the start of the process’s address space. Generated by the CPU.

58
New cards

Address Binding

Assignment of a program’s address (and ways of doing it)

59
New cards

Compile Time Address Binding

The compiler generates the exact physical location in memory of an address for a process

60
New cards

Loading Time Address Binding

The compiler generates an address, but at load time, the loader and linker determine the process’s starting position.

61
New cards

Execution Time Address Binding

The compiler generates an address, and the OS can place the process anywhere it wants. When the program references an address, it must be translated from logical to physical at run time through hardware support.

62
New cards

Memory-Management Unit (MMU)

Hardware device that maps logical addresses to physical addresses at run time. It uses a relocation register that is added to every address generated by a user process at the time it is sent to memory.

63
New cards

Contiguous Allocation (Memory)

When segments of memory are allocated to a process so that all of the process is together.

64
New cards

Hole

A block of available memory.

65
New cards

First-Fit Memory Allocation

Allocate a process’s memory to the first large enough hole

66
New cards

Best-Fit Memory Allocation

Allocate the smallest hole that is big enough for a process

67
New cards

Worst-Fit Memory Allocation

Allocate the largest hole that is big enough for a process

68
New cards

External Fragmentation

When there is enough unallocated memory in total to fit a process, but it is not contiguous.

69
New cards

Compaction

Mass relocation of processes to remove external fragmentation, can be done during swapping.

70
New cards

Internal Fragmentation

When allocated memory is slightly larger than requested memory and not being used

71
New cards

Paging

A way of addressing external fragmentation: physical memory is divided into fixed-size blocks called frames, and logical memory into blocks of the same size called pages. A process’s pages are contiguous in its logical address space, but the frames that hold them need not be contiguous in physical memory.

72
New cards

Page Table

Table to translate a logical address to a physical address with paging. (See notes for address translation).

73
New cards

Page-Table Base Register (PTBR)

A register that points to the page table in memory

74
New cards

Page-Table Length Register (PTLR)

A register that indicates the size of the page table with x86 and ARM

75
New cards

L1 cache

Translation Look-Aside Buffers (TLBs) are associative memory used to make page-table access quicker; effectively a small hardware cache of translations. Each entry stores a page number (key) and frame number (value). The TLB is erased/flushed on every context switch, since its entries belong to only one process.

76
New cards

Effective Access Time (EAT)

Without a TLB, it is 2 times the memory access time (one access for the page table plus the actual memory access).

With a TLB, it is (TLB access time + memory access time) * TLB hit rate + (TLB access time + 2 * memory access time) * TLB miss rate

Or, if M = memory access time, T = TLB access time, and P = TLB miss rate

= (T+M)(1-P) + (T+2M)P
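A direct computation of the formula, weighting the hit case by the hit rate 1-P and the miss case by the miss rate P (hypothetical helper):

```python
def eat_with_tlb(t, m, miss_rate):
    """EAT = (T + M)(1 - P) + (T + 2M)P: a TLB hit costs one memory
    access, a miss adds a second one for the page-table lookup."""
    return (t + m) * (1 - miss_rate) + (t + 2 * m) * miss_rate
```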

77
New cards

Valid/Invalid Bit

Bit used by page tables and the TLB: one value indicates that the associated page is in the process’s logical address space (and the entry is usable), and the other indicates that it is not, or can’t be confirmed.

78
New cards

Process Arrival with TLB

  1. User creates process that requires N pages

  2. If N frames are available, the OS allocates these frames to the pages

  3. Each page is put in a frame and its frame number is put in the corresponding entry in the page table

  4. TLB is flushed at the beginning of the context switch, marking valid-invalid bits as invalid

  5. OS starts process

  6. OS loads TLB entries as each page is accessed by the process, replacing an existing entry if the TLB is full

79
New cards

PCB with paging

Includes Page Table Register and possible copy of TLB

80
New cards

Context Switch with TLB

  1. Copy PTBR to PCB

  2. Copy TLB to PCB (optional)

  3. Flush TLB (mark all invalid)

  4. Restore PTBR for new process

  5. Restore the TLB for the new process if it was saved

81
New cards

Contiguous Allocation (Disk)

The OS allocates a contiguous chunk of free blocks when it creates a file, with the FCB storing the start location and length. It has problems similar to contiguous memory allocation, e.g., when a file changes size. However, it is easy to implement, and head movement is minimized for sequential access, which suits an HDD.

82
New cards

Linked Allocation

Each file is allocated a linked list of blocks scattered across the disk; the FCB holds pointers to the first and last blocks, and each block holds a pointer to the next one.

This is better for an SSD: there is no external fragmentation, files can change size easily, and sequential access is efficiently supported.

However, random access requires traversing the list, the pointers waste space in every block, and the scattered blocks cause many seek operations, which is bad for an HDD.

83
New cards

Indexed Allocation

When a file is allocated multiple scattered blocks, including an index block that has pointers to all other blocks.

It supports random access, has no external fragmentation, and has no real size limit.

The FCB, however, becomes harder to design, and the index block itself is desired to be kept small.

84
New cards

Linked-List Index Blocks

If a file outgrows its index block, the OS creates another (overflow) index block and links it to the previous one.

85
New cards

Multi-Level Indexed Allocation

Multiple levels of index blocks, each level linking to the next, with the last level pointing to the data blocks, allowing file size to grow. Random access in large files is inefficient, requiring many seek operations, which is bad for an HDD.

86
New cards

Combined Scheme

Modern approach of index blocks that combines the linked list indexed allocation with the multi-level indexed allocation. In EXT there are direct blocks that point to the data, single indirect blocks that link to a level that points to the data, double indirect blocks that link to a two-level system that points to the data, and triple indirect blocks that link to a three-level system that points to the data. As the levels increase, the available space increases.

87
New cards

Super Block

The first block in an EXT-like disk structure, describing the number of i-nodes and data blocks in the file system, as well as where the i-node block begins; info about the file system.

88
New cards

i-node Bitmap

Bitmap of which i-node blocks are free, the 2nd block in an EXT-like system.

89
New cards

Data Bitmap

Bitmap of which data blocks are free, the 3rd block in an EXT-like system.

90
New cards

Steps to Create /foo/bar

  1. Read root i-node

  2. Read root data to find i-node number for foo

  3. Read i-node number for foo

  4. Read foo data to see if a block is available

  5. Read i-node bitmap for bar to find an available i-node block

  6. Write i-node bitmap

  7. Write to foo data block (i-node number for bar)

  8. Read i-node for bar to see if it is ok

  9. Write i-node for bar to update it

  10. Write i-node for foo (foo data block id)

91
New cards

Steps to Write to /foo/bar

  1. Read i-node for /foo/bar

  2. Read data bitmap to find an available block

  3. Write to (update) data bitmap flag for data block

  4. Write data to /foo/bar data block

  5. Write /foo/bar i-node block to update access time

92
New cards

Steps to Open /foo/bar (if not accessed recently)

  1. Read root i-node to find root data

  2. Read root data to find foo i-node

  3. Read foo i-node to find foo data

  4. Read foo data to find bar i-node

  5. Read bar i-node

93
New cards

Read foo/bar if accessed recently

  1. Read bar i-node to get bar data

  2. Read bar data

  3. Write to bar i-node to indicate it’s been accessed

94
New cards

Bitmap / Bit Vector of Free Space

A bitmap with one bit for each block on the disk, 1 indicating that a block is free. It’s easy to implement, but scanning becomes slow as storage grows large.
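A minimal sketch of the bitmap approach (hypothetical helpers, using a Python list of bits for clarity):

```python
def first_free(bitmap):
    """Scan the free-space bitmap (1 = free) for the first free block."""
    for block, bit in enumerate(bitmap):
        if bit == 1:
            return block
    return None  # no free blocks

def allocate(bitmap, block):
    bitmap[block] = 0  # mark the block as in use
```

The linear scan is exactly why this approach struggles as disks grow.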

95
New cards

Linked List of Free Space

All free blocks on the disk are held in a linked list, with the head of the list cached in kernel memory. It’s easy to implement and easy to add or remove free blocks, but pointer management leads to overhead and traversal is inefficient.

96
New cards

Crash Consistency

A disk data structure going back to the previous consistent state or being correctly updated after a crash during a write operation.

97
New cards

Crash Consistent Scenarios

Just the data block is written to the disk, and the i-node and bitmap are not updated

The i-node and bitmap are updated, but the data block is not written to the disk.

98
New cards

Crash Inconsistent Scenarios

Either the i-node or the bitmap are updated, but not both. Data block may or may not be updated.

99
New cards

Journaling

Method of creating crash consistency: when updating the disk, create a log of what’s supposed to happen. If a crash occurs during the actual disk write, the OS can fix it based on the log.

Has a TxB and TxE block to indicate the beginning and end of the log.

100
New cards

Data Journaling (Basic)

  1. Journal write the TxB, i-node, bitmap, data block, and TxE blocks all at once.

  2. Checkpoint - Update the actual disk structures
