Operating System Final

Last updated 11:21 PM on 4/30/23

167 Terms

1
New cards
Deadlock
when there is a set of processes each holding resources and waiting to acquire more resources
2
New cards
Resource allocation graph
shows instantaneous state of a system and its resources
3
New cards
Request edge
directed edge from process to resource
4
New cards
Assignment edge
directed edge from resource instance to process
5
New cards
Deadlock prevention
prohibit at least one of the necessary deadlock conditions so deadlock can never occur
6
New cards
Deadlock detection
let deadlocks happen, but provide support for getting out of deadlock
7
New cards
Deadlock avoidance
watch and regulate process behavior so they never cause a deadlock
8
New cards
4 necessary deadlock conditions:
mutual exclusion, hold and wait, no preemption, circular wait
9
New cards
Mutual Exclusion
resources can only be used by one process at a time. Prevent by giving up critical sections (bad)
10
New cards
Hold and wait
processes can have some resources and may ask for more. Prevent by having processes ask for all resources at startup (could lead to starvation)
11
New cards
no preemption
OS gets a resource back only when a process is done using it. Prevent by killing processes and taking their resources (bad)
12
New cards
circular wait
a cycle of processes exists in which each process waits on a resource held by the next. Prevent by imposing a total ordering on all resources (doable)
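Preventing circular wait with a total ordering on resources can be sketched in a few lines; the lock names and the `acquire_in_order` helper below are illustrative, not from the cards:

```python
import threading

# Two hypothetical resources; ORDER imposes a global total ordering on them.
lock_a = threading.Lock()
lock_b = threading.Lock()
ORDER = {id(lock_a): 0, id(lock_b): 1}

def acquire_in_order(*locks):
    """Acquire locks sorted by the global ordering, so no wait cycle can form."""
    ordered = sorted(locks, key=lambda l: ORDER[id(l)])
    for lock in ordered:
        lock.acquire()
    return ordered

def release_all(locks):
    for lock in reversed(locks):
        lock.release()
```

Every thread that calls `acquire_in_order(lock_b, lock_a)` still takes `lock_a` first, so two threads can never hold one lock each while waiting on the other.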
13
New cards
Claim edge
from process to resource indicating the process may ask for resource in the future, turns to request edge when process asks for resource in future
14
New cards
Safe state
processes can’t cause deadlock no matter what they do
15
New cards
Unsafe state
not necessarily deadlocked but processes could create a deadlock, whether a deadlock happens or not is out of the OS’s control
16
New cards
Banker’s algorithm
before granting a request, check that the system would remain in a safe state; grant the request only if so
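The safety test at the heart of the Banker's algorithm can be sketched as below; this is a simplified version and the variable names are my own:

```python
def is_safe(available, allocation, need):
    """Banker's safety test: can every process finish in some order?
    available: free units per resource type
    allocation/need: per-process lists of the same shape."""
    work = list(available)
    finish = [False] * len(allocation)
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finish):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend process i runs to completion and returns its resources.
                work = [w + a for w, a in zip(work, allocation[i])]
                finish[i] = True
                progress = True
    return all(finish)   # safe iff every process could finish
```

If `is_safe` returns False for the state that would result from a request, the Banker's algorithm makes the process wait instead of granting.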
17
New cards
Memory
large linearly addressable array of words with 2 regions: the OS and user processes
18
New cards
Address binding
how to choose physical memory addresses where the program will run
19
New cards
Compile time
compiler chooses an address when it builds the program, builds absolute code that must be run in a particular location and rebuilt to run elsewhere
20
New cards
Load time
loader chooses the memory address when the program is loaded; builds relocatable code that can run anywhere but cannot be moved after the program starts
21
New cards
Execution time
binding is delayed until the program executes, with hardware (the MMU) mapping addresses at run time; the process can move after it has started running
22
New cards
Static linking
do all linking at build time, application code and libraries linked to one image that is copied into main mem and run
23
New cards
Dynamic linking
link with library code at load time, smaller executable image
24
New cards
Stub
small piece of code that locates and replaces itself with the real subroutine; useful for shared libraries
25
New cards
Dynamic loading
load parts of program when they are first needed
26
New cards
overlays
same region of memory used for several subroutines, one after another
27
New cards
Swapping
When main mem is filling up, temporarily remove a process from memory into the backing store, large transfer cost
28
New cards
Backing store
fast large storage used to hold mem contents for processes not inside main mem
29
New cards
Contiguous allocation
all memory for a process must be in one continuous block
30
New cards
Multiple partition allocation
OS has to find room for multiple processes as they arrive and leave
31
New cards
Dynamic storage allocation
To find space for processes, OS must maintain list of allocated regions and list of free holes, must coalesce memory when it is freed
32
New cards
first fit
choose first hole that the block fits into
33
New cards
best fit
find smallest hole that is at least the size of the block
34
New cards
next fit
find next sufficiently large hole after the one last allocated
35
New cards
worst fit
find largest hole and place block there
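The placement strategies above differ only in which hole they pick; a minimal sketch (the hole sizes and the request size are made up for illustration):

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, else None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that is at least `size`."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return min(fits)[1] if fits else None

def worst_fit(holes, size):
    """Return the index of the largest hole that fits."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= size]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]
# For a request of 212: first fit picks 500 (index 1),
# best fit picks 300 (index 3), worst fit picks 600 (index 4).
```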
36
New cards
External fragmentation
small gaps between allocated memory blocks that can’t fit a process, holes too small to use
37
New cards
Internal fragmentation
when a process is given more space than it needs so some of the space it is given is wasted
38
New cards
50% rule
with first fit, if n blocks have been allocated, another 0.5n blocks will be lost to fragmentation, i.e. one third of memory is unusable
39
New cards
Compaction
shuffle allocated memory together to merge the small holes into one large hole; only possible with execution-time address binding
40
New cards
Logical address
what the user program sees (aka virtual address)
41
New cards
Physical address
address meaningful to hardware
42
New cards
Address translation
run-time conversion from logical to physical address, do not need logical/physical distinction for compile or load time address binding
43
New cards
Memory management unit
hardware device that handles mapping logical to physical addresses, part of CPU
44
New cards
Pages
memory allocated as multiple fixed-sized blocks
45
New cards
page table
table that says where process pages are, array of frame numbers indexed by page
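With paging, translation is just splitting the address into a page number and an offset; a sketch assuming 4 KiB pages and a flat list as the page table:

```python
PAGE_SIZE = 4096  # bytes; an assumed, common page size

def translate(logical_addr, page_table):
    """Split a logical address into (page, offset) and map via the page table."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    frame = page_table[page]          # an out-of-range page would fault here
    return frame * PAGE_SIZE + offset

page_table = [7, 2, 5]                # page i lives in frame page_table[i]
translate(4100, page_table)           # page 1, offset 4 -> 2*4096 + 4 = 8196
```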
46
New cards
Page table base register
holds address of where current page table is located
47
New cards
Copy on write
instead of copying all of the parent's memory, only the page table is copied, which references the parent's memory. At first all pages are set to read-only; only the pages that get modified are actually copied
48
New cards
Translation look-aside buffer (TLB)
special cache for each CPU core, maps page number straight to frame number to reduce overhead
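A TLB can be sketched as a small page-to-frame cache sitting in front of the page table; the capacity and the eviction policy below are arbitrary choices for illustration:

```python
class TLB:
    """Tiny page -> frame cache in front of the page table (a sketch)."""
    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = {}              # page -> frame
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.entries:
            self.hits += 1             # fast path: no page-table walk
        else:
            self.misses += 1           # slow path: walk the page table
            if len(self.entries) >= self.capacity:
                # Evict the oldest entry (insertion order) to make room.
                self.entries.pop(next(iter(self.entries)))
            self.entries[page] = page_table[page]
        return self.entries[page]

tlb = TLB(capacity=2)
pt = [9, 8, 7]
tlb.lookup(0, pt); tlb.lookup(0, pt)   # miss, then hit
```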
49
New cards
Page table length register
tells hardware the length of the current page table
50
New cards
Hierarchical page tables
paging the page table to break it up into page-sized sections, creating inner and outer page tables
51
New cards
Hashed page table
store a sparse mapping from page number to frame number; hash the page number to an integer index, and each page table entry holds a chain of records to deal with collisions
52
New cards
Inverted page table
store address translation by memory frame instead of by process page, decreased storage overhead, more complicated lookup
53
New cards
Demand paging
move seldom used pages into the backing store
54
New cards
page fault
when a process tries to access a page that is in the backing store; traps to the kernel
55
New cards
Pure demand paging
only load in pages when they are first used
56
New cards
Victim page
page to page out when we need to bring a page in
57
New cards
Local replacement policy
victim can only be a process’s own pages
58
New cards
Global replacement policy
any page of any process can be chosen as the victim, will let thrashing spread from one process to others
59
New cards
page replacement algorithm
rule for choosing victim
60
New cards
first in first out
choose victim that has been around the longest
61
New cards
Belady’s anomaly
when more frames yields a higher page fault rate
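FIFO's susceptibility to Belady's anomaly is easy to demonstrate with a small simulation on the classic reference string; the sketch below is my own:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement on a reference string."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.discard(queue.popleft())   # evict oldest resident page
            frames.add(page)
            queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]  # classic demonstration string
fifo_faults(refs, 3)   # 9 faults
fifo_faults(refs, 4)   # 10 faults: more frames, more faults
```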
62
New cards
stack algorithm
immune to Belady's anomaly; the pages in memory with n frames are always a subset of the pages in memory with n+1 frames, so adding frames never throws out a page that would have remained with fewer frames
63
New cards
Optimal page replacement
victim is the page that won't be used for the longest time; requires knowledge of the future
64
New cards
Least recently used
victim is the page that has not been used for the longest time; implement with a timestamp on every reference
65
New cards
Least recently used approximation
use reference bit to see which pages are being used
66
New cards
second chance algorithm
let a page stay in memory if its reference bit is set, clearing the bit and moving to the next potential victim (aka clock algorithm)
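Clock-style victim selection can be sketched as below; the list-based layout and the function signature are my own:

```python
def clock_select(pages, ref_bits, hand):
    """Second-chance (clock) victim selection (a sketch).
    pages: resident pages; ref_bits: parallel list of reference bits;
    hand: current clock-hand position. Returns (victim_index, new_hand)."""
    n = len(pages)
    while True:
        if ref_bits[hand]:
            ref_bits[hand] = 0          # give the page a second chance
            hand = (hand + 1) % n
        else:
            return hand, (hand + 1) % n  # first page with a clear bit is evicted

pages = ['A', 'B', 'C', 'D']
ref_bits = [1, 0, 1, 1]
clock_select(pages, ref_bits, 0)   # clears A's bit, then picks B (index 1)
```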
67
New cards
Least frequently used
count number of references to each page, replace page with smallest count
68
New cards
enhanced second chance
use both reference and dirty bits, dirty bit tells which pages are more expensive to replace
69
New cards
Page buffering
keep a pool of free frames available so that when a page fault occurs the faulted page can be read into a free frame immediately; the victim page is written out afterwards to replenish the pool
70
New cards
minor page fault
when a spare frame still contains the page the process faulted on, so it can be remapped without doing I/O
71
New cards
thrashing
increased page fault rate; the system isn't getting much work done because it is spending more time resolving page faults than executing
72
New cards
proportional allocation
give each process a number of frames proportional to its total memory size
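Proportional allocation is just a_i = s_i / S * m (process size over total size, times total frames); a sketch with made-up process sizes, where the remainder-handling choice is mine:

```python
def proportional_allocation(sizes, total_frames):
    """Give each process frames in proportion to its virtual-memory size.
    Floors each share, then hands leftover frames to the largest processes."""
    total = sum(sizes)
    shares = [s * total_frames // total for s in sizes]
    leftover = total_frames - sum(shares)
    # Distribute any remainder to the biggest processes first.
    for i in sorted(range(len(sizes)), key=lambda i: -sizes[i])[:leftover]:
        shares[i] += 1
    return shares

proportional_allocation([10, 40, 50], 20)   # -> [2, 8, 10]
```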
73
New cards
working set
set of pages process is actively using right now
74
New cards
Active list
part of Linux 2.4, pages not candidates for replacement
75
New cards
inactive list
part of Linux 2.4, pages that make good victims
76
New cards
clean list
part of Linux 2.4, pages identical to their copy in the backing store
77
New cards
dirty list
part of Linux 2.4, pages that have been modified and so differ from the backing store
78
New cards
CPU
a few processing elements that are independently controllable, lots of cache; runs a few threads each doing their own thing; acts as the host
79
New cards
GPU
lots of processors that are each less powerful and built to do the same thing, little cache; runs lots of threads; acts as the device
80
New cards
Block
how GPU is organized, a set of many threads that can share memory between them
81
New cards
Grid
a collection of blocks
82
New cards
__global__
run on device, call from host
83
New cards
kernel
function that runs on the device and is executed in parallel by many threads
84
New cards
CUDA - call to allocate memory
cudaMalloc((void **)&devList, count * sizeof(int))
85
New cards
CUDA- call to copy memory
cudaMemcpy(devList, myList, count * sizeof(int), cudaMemcpyHostToDevice)
86
New cards
CUDA- call to free
cudaFree(devResults)
87
New cards
CUDA- call to reset
cudaDeviceReset()
88
New cards
centralized computing
when all processes exist in OS memory, hard to cooperate with other operating systems
89
New cards
distributed computing
allows for resource sharing, computational speedup, reliability, communication and cost effectiveness
90
New cards
Data migration
distributed computing goal where data can be moved to the host using them
91
New cards
Computational migration
execute computation steps/subroutines on the host with the needed resources, run subroutines remotely and keep everything else on host
92
New cards
Process migration
move entire program after it has started running for load balancing or data access
93
New cards
Goals of distributed computing
Scalability, Robustness (tolerate failure), availability, heterogeneity (different types of hosts and devices can cooperate)
94
New cards
Network operating system
use network to access remote resources, often requires special effort
95
New cards
Local Area Network
path between hosts is obvious, hosts have consistent performance characteristics, network has predictable structure, straightforward routing, limits on failures
96
New cards
Wide area network
hosts spread out with irregular networking patterns, obstacles such as routing choices, failed connections, and congestion
97
New cards
Physical layer
what the network looks like at a low level; says how to encode bits and how fast to transmit, e.g. Ethernet
98
New cards
data link layer
organizes data into frames to let systems share a connection; supports local routing decisions; detects physical-layer errors via frame check fields; delivers frames within a LAN
99
New cards
Network layer
handles non-trivial routing; organizes data into variable-length packets; adds additional header info, e.g. IP
100
New cards
Internet Protocol
best-effort packet delivery service. Each host is given a unique 32-bit address and each packet is stamped with source and destination addresses; congestion can cause packets to be dropped or duplicated; no error detection, no congestion control