virtualization
Provide an application with the illusion of resources that are not physically present.
reliability
A property of a system that does exactly what it is designed to do.
availability
The percentage of time that a system is usable.
portability
The ability of software to work across multiple hardware platforms.
application programming interface (API)
The system call interface provided by an operating system to applications.
overhead
The added resource cost of implementing an abstraction versus using the underlying hardware resources directly.
response time
The time for a task to complete, from when it starts until it is done.
throughput
The rate at which a group of tasks are completed.
performance predictability
Whether a system's response time or other performance metric is consistent over time.
time-sharing operating system
An operating system designed to support interactive use of the computer.
protection
The isolation of potentially misbehaving applications and users so that they do not corrupt other applications or the operating system itself.
operating system kernel
The kernel is the lowest level of software running on the system, with full access to all of the capabilities of the hardware.
process
The execution of an application program with restricted rights — the abstraction for protection provided by the operating system kernel.
executable image
File containing a sequence of machine instructions and initial data values for a program.
process control block
A data structure that stores all the information the operating system needs about a particular process
dual mode operation
Hardware processor that has (at least) two privilege levels: one for executing the kernel with complete access to the capabilities of the hardware and a second for executing user code with restricted rights.
privileged instruction
Instruction available in kernel mode but not in user mode.
hardware timer
A hardware device that can cause a processor interrupt after some delay, either in time or in instructions executed.
interrupt
An asynchronous signal to the processor that some external event has occurred that may require its attention.
interrupt handler
A kernel procedure invoked when an interrupt occurs.
TOCTOU (time-of-check-to-time-of-use)
A security vulnerability arising when an application can modify the user memory holding a system call parameter (such as a file name) after the kernel checks the validity of the parameter, but before the parameter is used in the actual implementation of the routine.
bootloader
Program stored at a fixed position on disk (or flash RAM) to load the operating system into memory and start it executing.
host operating system
An operating system that provides the abstraction of a virtual machine, to run another operating system as an application.
guest operating system
An operating system running in a virtual machine.
shell
A job control system implemented as a user-level process. When a user types a command to the shell, it creates a process to run the command.
producer-consumer relationship
The shell can use a pipe to connect two programs together, so that the output of one is the input of the other.
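A minimal sketch of this producer-consumer pattern on a POSIX system: the parent produces a line of text and writes it into a pipe, and the child consumes it. The message text and buffer size are arbitrary choices for illustration.

    /* Sketch: parent = producer writing into a pipe, child = consumer reading it. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) != 0) return 1;     /* fds[0] = read end, fds[1] = write end */
        if (fork() == 0) {
            /* child (consumer): read from the pipe and echo to stdout */
            char buf[64];
            close(fds[1]);
            ssize_t n = read(fds[0], buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("consumer got: %s", buf); }
            return 0;
        }
        /* parent (producer): write a message into the pipe */
        close(fds[0]);
        const char *msg = "hello from the producer\n";
        write(fds[1], msg, strlen(msg));
        close(fds[1]);
        wait(NULL);                       /* reap the child */
        return 0;
    }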
client-server relationship
Two-way communication between processes, where the client sends requests to the server to do some task, and when the operation is complete, the server replies back to the client.
microkernel
An operating system design where the kernel itself is kept small, and instead most of the functionality of a traditional operating system kernel is put into a set of user-level processes, or servers, accessed from user applications via interprocess communication.
monolithic kernel
An operating system design where most of the operating system functionality is linked together inside the kernel.
parent process
A process that creates another process.
child process
A process created by another process (its parent).
UNIX fork
A system call that makes a complete copy of the parent process, with one exception so the two can distinguish themselves: fork returns zero in the child and the child's process ID in the parent.
UNIX exec
A system call that brings a new executable image into memory and starts it running in place of the calling process's current image.
UNIX wait
A system call that pauses the parent until a child process finishes, crashes, or is terminated.
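A minimal sketch of how a shell-like program might combine fork, exec, and wait on a POSIX system; the command /bin/ls is only an example.

    /* Sketch: create a child with fork, replace its image with exec, and wait for it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                  /* copy the current process */
        if (pid == 0) {
            /* child: fork returned 0, so replace the copy with a new image */
            char *argv[] = { "ls", "-l", NULL };
            execv("/bin/ls", argv);
            perror("execv");                 /* reached only if exec fails */
            exit(1);
        } else if (pid > 0) {
            /* parent: fork returned the child's pid; wait for it to finish */
            int status;
            waitpid(pid, &status, 0);
            printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
        } else {
            perror("fork");
        }
        return 0;
    }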
concurrency
Multiple activities that can happen at the same time.
thread
A single execution sequence that represents a separately schedulable task.
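A minimal sketch of creating and joining threads with POSIX pthreads; the thread count and message are arbitrary.

    /* Sketch: each thread is a separately schedulable execution sequence in the process. */
    #include <pthread.h>
    #include <stdio.h>

    void *say_hello(void *arg) {
        long id = (long)arg;
        printf("hello from thread %ld\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (long i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, say_hello, (void *)i);   /* start a new thread */
        for (long i = 0; i < 2; i++)
            pthread_join(t[i], NULL);                            /* wait for it to finish */
        return 0;
    }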
thread scheduler
Software that maps threads to processors by switching between running threads and threads that are ready but not running.
cooperative multithreading
Each thread runs without interruption until it explicitly relinquishes control of the processor.
preemptive multithreading
The operating system scheduler may switch out a running thread, e.g., on a timer interrupt, without any explicit action by the thread to relinquish control at that point.
ready list
The set of threads that are ready to be run but which are not currently running.
waiting list
The set of threads that are waiting for a synchronization event or timer expiration to occur before becoming eligible to be run.
finished list
The set of threads that are complete but not yet de-allocated.
kernel thread
A thread that is implemented inside the operating system kernel.
thread context switch
Suspend execution of a currently running thread and resume execution of some other thread.
policy-mechanism separation
A system design principle where the implementation of an abstraction is independent of the resource allocation policy of how the abstraction is used.
green threads
A thread system implemented entirely at user-level without any reliance on operating system kernel services, other than those designed for single-threaded processes.
scheduler activations
A multiprocessor scheduling policy where each application is informed of how many processors it has been assigned and whenever the assignment changes.
asynchronous I/O
A design pattern for system calls to allow a single-threaded process to make multiple concurrent I/O requests. When the process issues an I/O request, the system call returns immediately. The process later on receives a notification when the I/O completes.
event-driven programming
A coding design pattern where a thread spins in a loop; each iteration gets and processes the next I/O event.
continuation
A data structure used in event-driven programming that keeps track of a task's current state and its next step.
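A minimal sketch of event-driven programming with continuations, assuming a toy event loop rather than real I/O: each task records its next step as a function pointer plus saved state, and a single thread loops over the queued tasks.

    /* Sketch: a continuation holds a task's saved state and the function to run
       when its next event arrives; one thread drives all tasks from a loop. */
    #include <stdio.h>

    struct continuation {
        void (*next_step)(struct continuation *);  /* what to run on the next event */
        int state;                                 /* saved task state */
    };

    void step2(struct continuation *c) {
        printf("task %d: finishing (step 2)\n", c->state);
    }

    void step1(struct continuation *c) {
        printf("task %d: started (step 1), waiting for next event\n", c->state);
        c->next_step = step2;                      /* record the task's next step */
    }

    int main(void) {
        struct continuation tasks[2] = { { step1, 1 }, { step1, 2 } };
        /* event loop: each iteration takes the next "event" and runs its continuation */
        for (int round = 0; round < 2; round++)
            for (int i = 0; i < 2; i++)
                tasks[i].next_step(&tasks[i]);
        return 0;
    }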
data parallel programming
A programming model where the computation is performed in parallel across all items in a data set.
mechanism
How to do something; the means by which a policy decision is carried out.
policy
Determining what to do or when to do it (i.e., a decision-making rule).
race condition
When the behavior of a program relies on the interleaving of operations of different threads.
atomic operation
Indivisible operations that cannot be interleaved with or split by other operations.
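A minimal sketch contrasting a racy increment with an atomic one, using C11 atomics and POSIX threads; the thread and iteration counts are arbitrary.

    /* Sketch: a race condition on a shared counter, and an atomic fix. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    long racy = 0;                 /* unsynchronized: lost updates are possible */
    atomic_long safe = 0;          /* atomic: each increment is indivisible */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            racy++;                            /* read-modify-write can interleave */
            atomic_fetch_add(&safe, 1);        /* indivisible read-modify-write */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[4];
        for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
        printf("racy = %ld (often < 400000), safe = %ld\n", racy, atomic_load(&safe));
        return 0;
    }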
memory barrier
An instruction that prevents the compiler and hardware from reordering memory accesses across the barrier: no accesses before the barrier are moved after it, and no accesses after the barrier are moved before it.
synchronization variable
A data structure used for coordinating concurrent access to shared state.
state variable
Member variable of a shared object.
mutual exclusion
When one thread uses a lock to prevent concurrent access to a shared data structure.
lock
A type of synchronization variable used for enforcing atomic, mutually exclusive access to shared data.
critical section
A sequence of code that operates on shared state.
condition variable
A synchronization variable that enables a thread to efficiently wait for a change to shared state protected by a lock.
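A minimal sketch of a lock, a critical section, and a condition variable working together, using POSIX threads: a waiter sleeps until a setter changes the shared state.

    /* Sketch: a condition variable guarding shared state protected by a lock. */
    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  ready_cv = PTHREAD_COND_INITIALIZER;
    int ready = 0;                       /* state variable protected by lock */

    void *waiter(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!ready)                   /* re-check: waits can wake spuriously */
            pthread_cond_wait(&ready_cv, &lock);   /* atomically unlock and sleep */
        printf("waiter: saw ready\n");
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, waiter, NULL);
        pthread_mutex_lock(&lock);       /* critical section: update shared state */
        ready = 1;
        pthread_cond_signal(&ready_cv);  /* wake a waiting thread */
        pthread_mutex_unlock(&lock);
        pthread_join(t, NULL);
        return 0;
    }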
readers/writers lock
A lock which allows multiple "reader" threads to access shared data concurrently provided they never modify the shared data, but still provides mutual exclusion whenever a "writer" thread is reading or modifying the shared data.
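A minimal usage sketch of the POSIX readers/writers lock API (pthread_rwlock); the shared integer stands in for any shared data.

    /* Sketch: many readers may hold the lock at once; a writer holds it exclusively. */
    #include <pthread.h>
    #include <stdio.h>

    pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
    int shared_value = 0;

    void *reader(void *arg) {
        (void)arg;
        pthread_rwlock_rdlock(&rw);           /* shared (read) mode */
        printf("reader sees %d\n", shared_value);
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    void *writer(void *arg) {
        (void)arg;
        pthread_rwlock_wrlock(&rw);           /* exclusive (write) mode */
        shared_value++;
        pthread_rwlock_unlock(&rw);
        return NULL;
    }

    int main(void) {
        pthread_t r1, r2, w;
        pthread_create(&w, NULL, writer, NULL);
        pthread_create(&r1, NULL, reader, NULL);
        pthread_create(&r2, NULL, reader, NULL);
        pthread_join(w, NULL);
        pthread_join(r1, NULL);
        pthread_join(r2, NULL);
        return 0;
    }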
synchronization barrier
A synchronization primitive where n threads operating in parallel check in to the barrier when their work is completed. No thread returns from the barrier until all n check in.
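A minimal sketch using the POSIX barrier API (available on Linux); the thread count of 4 is arbitrary. No thread passes the barrier until all four have checked in.

    /* Sketch: threads check in at the barrier after phase 1, then all start phase 2. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 4
    pthread_barrier_t barrier;

    void *phase_worker(void *arg) {
        long id = (long)arg;
        printf("thread %ld: finished phase 1\n", id);
        pthread_barrier_wait(&barrier);        /* check in; block until all N do */
        printf("thread %ld: starting phase 2\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        pthread_barrier_init(&barrier, NULL, N);
        for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, phase_worker, (void *)i);
        for (long i = 0; i < N; i++) pthread_join(t[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }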
starvation
The lack of progress for one task, due to resources given to higher priority tasks.
atomic read-modify-write instruction
A processor-specific instruction that lets one thread temporarily have exclusive and atomic access to a memory location while the instruction executes. Typically, the instruction (atomically) reads a memory location, does some simple arithmetic operation to the value, and stores the result.
spinlock
A lock where a thread waiting for a BUSY lock "spins" in a tight loop until some other thread makes it FREE.
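A minimal spinlock sketch built on a C11 atomic flag; atomic_flag_test_and_set is an atomic read-modify-write that sets the flag and returns its previous value in one indivisible step.

    /* Sketch: a spinlock protecting a shared counter. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    atomic_flag busy = ATOMIC_FLAG_INIT;   /* FREE when clear, BUSY when set */
    long counter = 0;                      /* shared data protected by the lock */

    void spin_lock(void) {
        while (atomic_flag_test_and_set(&busy))
            ;                              /* old value was set: lock BUSY, keep spinning */
    }

    void spin_unlock(void) {
        atomic_flag_clear(&busy);          /* mark the lock FREE */
    }

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            spin_lock();
            counter++;                     /* critical section */
            spin_unlock();
        }
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
        printf("counter = %ld\n", counter);   /* expect 200000 */
        return 0;
    }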
serialization
Event A must happen before Event B.
mutual exclusion
Events A and B must not happen at the same time.
false sharing
Extra inter-processor communication required because a single cache entry contains portions of two different data structures with different sharing patterns.
fine-grained locking
A way to increase concurrency by partitioning an object's state into different subsets each protected by a different lock.
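A minimal sketch of fine-grained locking: a toy hash table whose state is partitioned into buckets, each protected by its own lock, so operations on different buckets can proceed concurrently. The bucket count and hash are arbitrary.

    /* Sketch: per-bucket locks instead of one lock over the whole table. */
    #include <pthread.h>
    #include <stdio.h>

    #define NBUCKETS 8

    struct bucket {
        pthread_mutex_t lock;
        long count;                    /* stand-in for per-bucket shared state */
    };

    struct bucket table[NBUCKETS];

    void table_init(void) {
        for (int i = 0; i < NBUCKETS; i++) {
            pthread_mutex_init(&table[i].lock, NULL);
            table[i].count = 0;
        }
    }

    void table_add(int key) {
        struct bucket *b = &table[key % NBUCKETS];   /* only this bucket is locked */
        pthread_mutex_lock(&b->lock);
        b->count++;
        pthread_mutex_unlock(&b->lock);
    }

    void *worker(void *arg) {
        long base = (long)arg;
        for (int i = 0; i < 1000; i++)
            table_add((int)base + i);
        return NULL;
    }

    int main(void) {
        pthread_t t[2];
        table_init();
        pthread_create(&t[0], NULL, worker, (void *)0L);
        pthread_create(&t[1], NULL, worker, (void *)1L);
        pthread_join(t[0], NULL);
        pthread_join(t[1], NULL);
        long total = 0;
        for (int i = 0; i < NBUCKETS; i++) total += table[i].count;
        printf("total inserts = %ld\n", total);      /* expect 2000 */
        return 0;
    }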
ownership design pattern
A technique for managing concurrent access to shared objects in which at most one thread owns an object at any time, and therefore the thread can access the shared data without a lock.
test and test-and-set
An implementation of a spinlock where the waiting processor waits until the lock is FREE before attempting to acquire it.
compare-and-swap
An atomic read-modify-write instruction that first tests the value of a memory location, and if the value has not been changed, sets it to a new value.
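A minimal sketch of a compare-and-swap retry loop, using the C11 atomic_compare_exchange_weak generic function for a lock-free increment.

    /* Sketch: each retry re-reads the current value and only stores the new one
       if no other thread changed it in the meantime. */
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    atomic_long total = 0;

    void cas_add(atomic_long *obj, long delta) {
        long old = atomic_load(obj);
        /* if *obj still equals old, replace it with old + delta; otherwise
           old is updated to the current value and we try again */
        while (!atomic_compare_exchange_weak(obj, &old, old + delta))
            ;
    }

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            cas_add(&total, 1);
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("total = %ld\n", atomic_load(&total));   /* expect 200000 */
        return 0;
    }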
MCS lock
An efficient spinlock implementation where each waiting thread spins on a separate memory location.
read-copy-update (RCU)
A synchronization abstraction that allows concurrent access to a data structure by multiple readers and a single writer at a time.
publish (RCU)
For a read-copy-update lock, a single, atomic memory write that updates a shared object protected by the lock. The write allows new reader threads to observe the new version of the object.
grace period (RCU)
For a shared object protected by a read-copy-update lock, the time from when a new version of a shared object is published until the last reader of the old version is guaranteed to be finished.
lock-free data structures
Concurrent data structure that guarantees progress for some thread: some method will finish in a finite number of steps, regardless of the state of other threads executing in the data structure.
deadlock
A cycle of waiting among a set of threads, where each thread waits for some other thread in the cycle to take some action.
bounded resources
A necessary condition for deadlock: there are a finite number of resources that threads can simultaneously use.
no preemption
A necessary condition for deadlock to occur: once a thread acquires a resource, its ownership cannot be revoked until the thread acts to release it.
wait while holding
A necessary condition for deadlock to occur: a thread holds one resource while waiting for another.
circular waiting
A necessary condition for deadlock to occur: there is a set of threads such that each thread is waiting for a resource held by another.
lock ordering
A widely used approach to prevent deadlock, where locks are acquired in a pre-determined order.
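A minimal sketch of lock ordering with POSIX threads: both transfer directions acquire lock_a before lock_b, so the wait-while-holding cycle needed for deadlock cannot form. The account balances are arbitrary.

    /* Sketch: every thread that needs both locks takes them in the same order. */
    #include <pthread.h>
    #include <stdio.h>

    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;
    int balance_a = 100, balance_b = 100;

    /* transfer in either direction, but always lock A before B */
    void transfer(int *from, int *to, int amount) {
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);
        *from -= amount;
        *to   += amount;
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
    }

    void *t1(void *arg) { (void)arg; transfer(&balance_a, &balance_b, 10); return NULL; }
    void *t2(void *arg) { (void)arg; transfer(&balance_b, &balance_a, 20); return NULL; }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);
        pthread_join(y, NULL);
        printf("a = %d, b = %d\n", balance_a, balance_b);   /* expect 110 and 90 */
        return 0;
    }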
safe state (deadlock)
A state of an execution such that regardless of the sequence of future resource requests, there is at least one safe sequence of decisions as to when to satisfy requests such that all pending and future requests are met.
unsafe state (deadlock)
A state of an execution such that there is at least one sequence of future resource requests that leads to deadlock no matter what processing order is tried.
deadlocked state
The system has at least one deadlock.
workload
A set of tasks for some system to perform, along with when each task arrives and how long each task takes to complete.
fairness
Partitioning of shared resources between users or applications either equally or balanced according to some desired priorities.
compute-bound task
A task that primarily uses the processor and does little I/O.
I/O-bound task
A task that primarily does I/O, and does little processing.
preemption
When a scheduler takes the processor away from one task and gives it to another.
time quantum (a.k.a. time slice)
The length of time that a task is scheduled before being preempted.
round robin
A scheduling policy that takes turns running each ready task for a limited period before switching to the next task.
short-term scheduler (a.k.a. dispatcher)
Runs after every interrupt, choosing which ready task to run next.
medium-term scheduler (a.k.a. swapper)
Runs every few seconds, deciding which processes to keep in memory and which to swap out.
long-term scheduler (a.k.a. batch job initiator)
Runs on the order of minutes, deciding when to admit new jobs into the system.
affinity scheduling
A scheduling policy where tasks are preferentially scheduled onto the same processor they had previously been assigned, to improve cache reuse.
oblivious scheduling
A scheduling policy where the operating system assigns threads to processors without knowledge of the intent of the parallel application.