Operating Systems

1
New cards
Process
A program in execution, consisting of the program code (text section), current activity (program counter and registers), a stack containing temporary data, and a heap for dynamically allocated memory
2
New cards
Process State NEW
The process is being created
3
New cards
Process State READY
The process is waiting to be assigned to a processor
4
New cards
Process State RUNNING
Instructions are being executed
5
New cards
Process State WAITING
The process is waiting for some event to occur, such as I/O completion
6
New cards
Process State TERMINATED
The process has finished execution
7
New cards
PCB Process Control Block
Data structure containing the process state, process ID, program counter, CPU registers, CPU-scheduling information, memory-management information, accounting information, and I/O status information
8
New cards
Context Switch
Switching the CPU from one process to another by saving the state of the old process and loading the saved state for the new process
9
New cards
Context Switch Time
Pure overhead: the system does no useful work while switching, typically 1-10 microseconds
10
New cards
Thread
A basic unit of CPU utilization comprising a thread ID, program counter, register set, and stack; it shares code, data, and open files with other threads in the same process
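
A minimal POSIX threads sketch of the sharing described above: both threads update the same global counter, so access to it must be synchronized. The variable and function names are illustrative.

    /* Compile with: gcc threads_demo.c -pthread (file name illustrative) */
    #include <pthread.h>
    #include <stdio.h>

    static int shared_counter = 0;          /* lives in the shared data segment */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);      /* threads must synchronize shared data */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;                   /* each thread gets its own stack and registers */
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("shared_counter = %d\n", shared_counter);   /* 200000 */
        return 0;
    }
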
11
New cards
Thread Benefits
Responsiveness, resource sharing, economy (cheaper than process creation), and scalability (can utilize multicore processors)
12
New cards
Many-to-One Thread Model
Multiple user-level threads mapped to a single kernel thread; one blocking thread causes all to block. Rarely used today
13
New cards
One-to-One Thread Model
Each user-level thread maps to a kernel thread, providing more concurrency; used by Windows and Linux
14
New cards
Many-to-Many Thread Model
Multiple user-level threads mapped to multiple kernel threads, allowing flexible thread management
15
New cards
User Threads
Threads managed by user-level thread library without kernel support
16
New cards
Kernel Threads
Threads supported and managed directly by the operating system
17
New cards
IPC Interprocess Communication
Mechanism for processes to communicate and synchronize their actions using shared memory or message passing
18
New cards
Shared Memory IPC
Processes share a region of memory for communication, which is fast but requires synchronization to avoid race conditions
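
A hedged sketch of shared-memory IPC using the POSIX shm_open/mmap calls; the object name /os_demo_shm and the message text are illustrative. A second process would open and map the same object to read it, and a semaphore (or similar) would be needed to avoid race conditions.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        const size_t SIZE = 4096;
        int fd = shm_open("/os_demo_shm", O_CREAT | O_RDWR, 0666);  /* create/open shared object */
        if (fd < 0) { perror("shm_open"); return 1; }
        ftruncate(fd, SIZE);                                        /* set its size */
        char *ptr = mmap(NULL, SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (ptr == MAP_FAILED) { perror("mmap"); return 1; }
        snprintf(ptr, SIZE, "hello from producer");                 /* write into the shared region */
        /* a consumer process would shm_open the same name, mmap it, and read ptr */
        return 0;
    }
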
19
New cards
Message Passing IPC
Processes communicate by exchanging messages via send and receive operations handled by the OS
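
A minimal sketch of message passing through the kernel using a pipe: the parent "sends" with write() and the child performs a blocking "receive" with read(). The message text is illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        char buf[64];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }
        if (fork() == 0) {                                   /* child: the receiver */
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof buf - 1);   /* blocking receive */
            buf[n > 0 ? n : 0] = '\0';
            printf("child received: %s\n", buf);
            return 0;
        }
        close(fds[0]);                                       /* parent: the sender */
        const char *msg = "hello via message passing";
        write(fds[1], msg, strlen(msg));                     /* send */
        close(fds[1]);
        wait(NULL);
        return 0;
    }
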
20
New cards
Direct Communication
Processes must name each other explicitly, using send(P, message) and receive(Q, message)
21
New cards
Indirect Communication
Messages are sent to and received from mailboxes (ports), using send(A, message) and receive(A, message)
22
New cards
Blocking Send
Sender is blocked until the message is received by the receiver or mailbox
23
New cards
Non-blocking Send
Sender sends the message and continues execution immediately
24
New cards
Blocking Receive
Receiver blocks until a message is available
25
New cards
Non-blocking Receive
Receiver retrieves either a valid message or null and continues
26
New cards
Critical Section
Segment of code in which a process may change common variables, update a table, or write a file
27
New cards
Critical Section Problem
Design a protocol so that when one process is executing in its critical section, no other process can execute in its critical section
28
New cards
Mutual Exclusion
If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
29
New cards
Progress Requirement
If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the next process to enter cannot be postponed indefinitely
30
New cards
Bounded Waiting
A bound must exist on the number of times other processes are allowed to enter their critical sections after a process has made a request to enter and before that request is granted
31
New cards
Race Condition
Situation where several processes access and manipulate shared data concurrently and the outcome depends on the order of execution
32
New cards
Peterson's Solution
Software solution to the critical-section problem for two processes, using two shared variables: a flag array and a turn variable
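
A C-style sketch of Peterson's solution, assuming loads and stores are not reordered (modern hardware would additionally need memory barriers); enter_region and leave_region are illustrative names.

    int turn;                 /* whose turn it is to enter */
    int flag[2];              /* flag[i] != 0 means process i wants to enter */

    void enter_region(int i) {
        int j = 1 - i;        /* the other process */
        flag[i] = 1;          /* I want to enter */
        turn = j;             /* but let the other go first if it also wants to */
        while (flag[j] && turn == j)
            ;                 /* busy wait */
        /* critical section follows */
    }

    void leave_region(int i) {
        flag[i] = 0;          /* I am done */
    }
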
33
New cards
Memory Barrier
Instruction that forces any change in memory to be propagated (made visible) to all other processors
34
New cards
Test-and-Set Instruction
Hardware instruction that atomically tests and modifies the content of a word, returning the original value and setting the new value to true
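
A C pseudocode sketch of test_and_set and a spinlock built on it; in reality the test-and-set step is a single atomic hardware instruction (or a compiler atomic builtin), not ordinary C.

    int lock = 0;                     /* 0 = free, 1 = held */

    int test_and_set(int *target) {   /* executed atomically by the hardware */
        int old = *target;
        *target = 1;
        return old;
    }

    void acquire(void) {
        while (test_and_set(&lock))   /* spin until the old value was 0 (free) */
            ;
    }

    void release(void) {
        lock = 0;
    }
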
35
New cards
Compare-and-Swap Instruction
Hardware instruction that atomically compares a value with an expected value and swaps in a new value only if they match
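
The same idea with compare_and_swap, again as C pseudocode for what is really a single atomic hardware instruction; release is lock = 0 as in the previous sketch.

    int compare_and_swap(int *value, int expected, int new_value) {
        int old = *value;
        if (old == expected)
            *value = new_value;       /* swap only when the current value matches expected */
        return old;                   /* caller inspects the original value */
    }

    void acquire(int *lock) {
        while (compare_and_swap(lock, 0, 1) != 0)
            ;                         /* spin until we atomically changed 0 -> 1 */
    }
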
36
New cards
Mutex Lock
Synchronization tool that provides mutual exclusion with acquire() and release() operations; when acquire() busy-waits, it is also called a spinlock
37
New cards
Semaphore
Integer variable accessed only through two atomic operations, wait() and signal(), used for process synchronization
38
New cards
Binary Semaphore
Semaphore whose integer value ranges only between 0 and 1; similar to a mutex lock
39
New cards
Counting Semaphore
Semaphore with integer value that can range over an unrestricted domain
40
New cards
Wait Operation on Semaphore
Decrements the semaphore value and blocks the process if the value becomes negative; also called the P operation
41
New cards
Signal Operation on Semaphore
Increments the semaphore value and wakes up a blocked process, if any; also called the V operation
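
A runnable sketch of the wait()/signal() semantics from the two cards above, built from a mutex and condition variable; the textbook version instead lets the value go negative and keeps an explicit waiting queue serviced by kernel block()/wakeup(). Initialize the fields with pthread_mutex_init, pthread_cond_init, and the desired starting value before use.

    #include <pthread.h>

    typedef struct {
        int value;
        pthread_mutex_t m;
        pthread_cond_t  c;
    } semaphore;

    void semaphore_wait(semaphore *S) {       /* the P operation */
        pthread_mutex_lock(&S->m);
        while (S->value <= 0)                 /* block until a resource is available */
            pthread_cond_wait(&S->c, &S->m);
        S->value--;
        pthread_mutex_unlock(&S->m);
    }

    void semaphore_signal(semaphore *S) {     /* the V operation */
        pthread_mutex_lock(&S->m);
        S->value++;
        pthread_cond_signal(&S->c);           /* wake one blocked waiter, if any */
        pthread_mutex_unlock(&S->m);
    }
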
42
New cards
Monitor
High-level synchronization construct that provides mutual exclusion and condition variables for process synchronization
43
New cards
Condition Variable
Variable within a monitor that allows processes to wait for certain conditions using wait() and signal() operations
44
New cards
Bounded Buffer Problem
Synchronization problem with n buffers in which a producer produces items and a consumer consumes them, requiring the semaphores empty, full, and mutex
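
A sketch of the bounded-buffer solution with the three semaphores named above, written against POSIX semaphores (sem_wait/sem_post playing the roles of wait/signal); BUFFER_SIZE and the item handling are illustrative. Before use, initialize with sem_init(&empty, 0, BUFFER_SIZE), sem_init(&full, 0, 0), and sem_init(&mutex, 0, 1).

    #include <semaphore.h>

    #define BUFFER_SIZE 8
    int buffer[BUFFER_SIZE];
    int in = 0, out = 0;

    sem_t empty;   /* counts empty slots  */
    sem_t full;    /* counts full slots   */
    sem_t mutex;   /* protects the buffer */

    void producer(int item) {
        sem_wait(&empty);                 /* wait for an empty slot */
        sem_wait(&mutex);                 /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&full);                  /* signal: one more full slot */
    }

    int consumer(void) {
        sem_wait(&full);                  /* wait for a full slot */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty);                 /* signal: one more empty slot */
        return item;
    }
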
45
New cards
Readers-Writers Problem
Synchronization problem where multiple readers can read simultaneously but writers need exclusive access, using the semaphores rw_mutex and mutex together with a read_count variable
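
A sketch of the first readers-writers solution using the names from the card (POSIX semaphores, both initialized to 1); in this variant readers can starve writers.

    #include <semaphore.h>

    sem_t rw_mutex;      /* exclusive access for writers */
    sem_t mutex;         /* protects read_count */
    int read_count = 0;

    void writer(void) {
        sem_wait(&rw_mutex);
        /* ... write the shared data ... */
        sem_post(&rw_mutex);
    }

    void reader(void) {
        sem_wait(&mutex);
        if (++read_count == 1)          /* first reader locks out writers */
            sem_wait(&rw_mutex);
        sem_post(&mutex);

        /* ... read the shared data ... */

        sem_wait(&mutex);
        if (--read_count == 0)          /* last reader lets writers back in */
            sem_post(&rw_mutex);
        sem_post(&mutex);
    }
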
46
New cards
Dining Philosophers Problem
Synchronization problem where five philosophers alternate between thinking and eating, each requiring two chopsticks to eat; solved using semaphores or a monitor
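
A sketch of the straightforward semaphore formulation, one chopstick semaphore per philosopher (each initialized to 1); taken literally it can deadlock if all five pick up their left chopstick at once, which is why monitor-based or asymmetric solutions are preferred.

    #include <semaphore.h>

    #define N 5
    sem_t chopstick[N];                         /* each initialized to 1 */

    void philosopher(int i) {
        for (;;) {
            /* think */
            sem_wait(&chopstick[i]);            /* pick up left chopstick  */
            sem_wait(&chopstick[(i + 1) % N]);  /* pick up right chopstick */
            /* eat */
            sem_post(&chopstick[i]);
            sem_post(&chopstick[(i + 1) % N]);
        }
    }
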
47
New cards
Deadlock
Situation where a set of processes are blocked because each process is holding a resource and waiting for another resource held by another process
48
New cards
Mutual Exclusion Deadlock Condition
Only one process at a time can use a resource
49
New cards
Hold and Wait Deadlock Condition
A process holding at least one resource is waiting to acquire additional resources held by other processes
50
New cards
No Preemption Deadlock Condition
A resource can be released only voluntarily by the process holding it after completing its task
51
New cards
Circular Wait Deadlock Condition
A set of waiting processes {P0, P1, ..., Pn} exists such that P0 waits for P1, P1 waits for P2, ..., and Pn waits for P0
52
New cards
Resource Allocation Graph
Directed graph with vertices for processes and resources and edges for requests and assignments, used to detect deadlocks
53
New cards
Deadlock Prevention
Ensure that at least one of the four necessary deadlock conditions cannot hold by invalidating one condition
54
New cards
Deadlock Avoidance
The system has additional a priori information about resource requests and uses algorithms to ensure it never enters an unsafe state
55
New cards
Safe State
System state in which there exists a sequence of all processes such that each process can obtain its needed resources, execute, and terminate
56
New cards
Unsafe State
System state that may lead to deadlock but deadlock is not guaranteed
57
New cards
Banker's Algorithm
Deadlock avoidance algorithm that checks if granting a resource request leaves the system in a safe state before allocation
58
New cards
Available Vector
Vector of length m indicating number of available resources of each type
59
New cards
Max Matrix
An n × m matrix where Max[i][j] = k means process Pi may request at most k instances of resource type Rj
60
New cards
Allocation Matrix
An n × m matrix where Allocation[i][j] = k means Pi is currently allocated k instances of Rj
61
New cards
Need Matrix
An n × m matrix where Need[i][j] = Max[i][j] - Allocation[i][j], representing each process's remaining resource needs
62
New cards
Safety Algorithm
Algorithm to determine whether the system is in a safe state by finding a sequence in which each process can finish
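
A sketch of the safety algorithm over the matrices defined in the previous cards; the process and resource counts N and M are illustrative constants.

    #define N 5   /* number of processes      */
    #define M 3   /* number of resource types */

    /* returns 1 if a safe sequence exists, 0 otherwise */
    int is_safe(int available[M], int allocation[N][M], int need[N][M]) {
        int work[M], finish[N] = {0};
        for (int j = 0; j < M; j++)
            work[j] = available[j];             /* Work = Available */

        for (int done = 0; done < N; ) {
            int progressed = 0;
            for (int i = 0; i < N; i++) {
                if (finish[i]) continue;
                int can_run = 1;
                for (int j = 0; j < M; j++)
                    if (need[i][j] > work[j]) { can_run = 0; break; }
                if (can_run) {                  /* Pi can finish: it releases its resources */
                    for (int j = 0; j < M; j++)
                        work[j] += allocation[i][j];
                    finish[i] = 1;
                    done++;
                    progressed = 1;
                }
            }
            if (!progressed) return 0;          /* no runnable process left: unsafe */
        }
        return 1;                               /* all processes can finish: safe */
    }
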
63
New cards
Resource Request Algorithm
Algorithm to determine whether a resource request can be granted immediately by checking that the request does not exceed the process's Need or the Available resources
64
New cards
Deadlock Detection
Allow the system to enter a deadlock state, then use a detection algorithm to identify deadlocked processes
65
New cards
Wait-for Graph
Simplified resource-allocation graph for single-instance resources, where an edge Pi → Pj means Pi is waiting for Pj
66
New cards
Deadlock Recovery Process Termination
Abort all deadlocked processes, or abort one process at a time until the deadlock cycle is eliminated
67
New cards
Deadlock Recovery Resource Preemption
Select a victim process, roll it back to a safe state, and restart it, considering cost so as to minimize impact
68
New cards
Logical Address
Address generated by the CPU; also referred to as a virtual address
69
New cards
Physical Address
Address seen by the memory unit; the actual location in main memory
70
New cards
Memory Management Unit MMU
Hardware device that maps virtual addresses to physical addresses at runtime
71
New cards
Base Register
Contains the smallest physical address value for a process
72
New cards
Limit Register
Contains the range of logical addresses; for protection, each logical address must be less than the limit register
73
New cards
Dynamic Loading
A routine is not loaded into memory until it is called, providing better memory utilization
74
New cards
Dynamic Linking
Linking is postponed until execution time, using stubs to locate library routines; useful for shared libraries
75
New cards
Contiguous Memory Allocation
Each process is contained in a single contiguous section of memory
76
New cards
External Fragmentation
Total memory space exists to satisfy a request but it is not contiguous
77
New cards
Internal Fragmentation
Allocated memory may be slightly larger than requested memory, leaving unused space internal to a partition
78
New cards
First-Fit Allocation
Allocate the first hole that is big enough from the list of free holes
79
New cards
Best-Fit Allocation
Allocate the smallest hole that is big enough producing the smallest leftover hole
80
New cards
Worst-Fit Allocation
Allocate the largest hole producing the largest leftover hole
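
A minimal sketch of first-fit over a list of free-hole sizes (the array representation is illustrative); best-fit and worst-fit differ only in scanning the whole list for the smallest or the largest adequate hole.

    /* returns the index of the chosen hole, or -1 if no hole is big enough */
    int first_fit(int hole_size[], int n_holes, int request) {
        for (int i = 0; i < n_holes; i++) {
            if (hole_size[i] >= request) {
                hole_size[i] -= request;   /* the leftover stays as a smaller hole */
                return i;
            }
        }
        return -1;
    }
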
81
New cards
Compaction
Shuffle memory contents to place all free memory together in one large block to reduce external fragmentation
82
New cards
Paging
Memory-management scheme that permits the physical address space of a process to be noncontiguous, dividing physical memory into fixed-sized frames
83
New cards
Frame
Fixed-sized block of physical memory, typically 512 bytes to 16 MB
84
New cards
Page
Fixed-sized block of logical memory, the same size as a frame
85
New cards
Page Table
Data structure used by virtual memory system to store mapping between virtual addresses and physical addresses
86
New cards
Page Number
Part of logical address used as an index into the page table
87
New cards
Page Offset
Part of the logical address that is combined with the frame base address to form the physical memory address
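
A small worked sketch of the page-number/offset split for a power-of-two page size (4 KB here, illustrative), plus the recombination with an assumed frame number.

    #include <stdio.h>

    #define PAGE_SIZE 4096                             /* 2^12 bytes */

    int main(void) {
        unsigned long logical = 0x12345;               /* example logical address */
        unsigned long page    = logical / PAGE_SIZE;   /* index into the page table */
        unsigned long offset  = logical % PAGE_SIZE;   /* kept unchanged */
        /* 0x12345 -> page 0x12, offset 0x345 */

        unsigned long frame    = 7;                    /* assume the page table maps this page to frame 7 */
        unsigned long physical = frame * PAGE_SIZE + offset;

        printf("page=%lu offset=%lu physical=0x%lx\n", page, offset, physical);
        return 0;
    }
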
88
New cards
TLB Translation Lookaside Buffer
Special fast-lookup hardware cache, also called associative memory, used to speed up address translation
89
New cards
Effective Access Time
Average time to access memory taking the TLB hit ratio into account, computed as a weighted average: hit_ratio × (access time on a TLB hit) + miss_ratio × (access time on a TLB miss)
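
A hedged worked example of that weighted average with assumed numbers (10 ns TLB lookup, 100 ns memory access, 80% hit ratio); this formulation charges an extra memory access for the page-table lookup on a TLB miss, which is one common textbook variant.

    #include <stdio.h>

    int main(void) {
        double tlb = 10.0, mem = 100.0, hit = 0.80;
        /* hit: TLB lookup + one memory access; miss: TLB lookup + page-table access + memory access */
        double eat = hit * (tlb + mem) + (1.0 - hit) * (tlb + 2.0 * mem);
        printf("EAT = %.1f ns\n", eat);   /* 0.8*110 + 0.2*210 = 130 ns */
        return 0;
    }
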
90
New cards
Hierarchical Paging
Page table is divided into multiple levels to reduce memory required for page tables
91
New cards
Hashed Page Table
The virtual page number is hashed into a page table with chains for collision resolution; used for address spaces larger than 32 bits
92
New cards
Inverted Page Table
One entry for each real page (frame) of memory, tracking which process owns each physical page; reduces the memory needed for page tables
93
New cards
Segmentation
Memory management scheme that supports user view of memory as a collection of variable-sized segments
94
New cards
Segment Table
Maps two-dimensional logical addresses to physical addresses using a base and limit for each segment
95
New cards
Virtual Memory
Separation of logical memory from physical memory, allowing execution of processes that are not completely in memory
96
New cards
Demand Paging
Pages are loaded into memory only when they are needed during execution, reducing I/O and memory requirements
97
New cards
Page Fault
Interrupt that occurs when a process accesses a page not currently in memory
98
New cards
Valid-Invalid Bit
Bit in the page-table entry where v means the page is in memory and i means it is not
99
New cards
Page Fault Steps
Trap to the OS, check whether the reference is valid, find a free frame, swap the page into the frame, reset the page table, and restart the instruction
100
New cards
Pure Demand Paging
Start a process with no pages in memory, causing a page fault on the first instruction
