race conditions
when the behavior of a program relies on the interleaving of operations of different threads
atomic operation
indivisible operations that cannot be interleaved with or split by other operations
memory barrier
an instruction that prevents the compiler and hardware from reordering memory accesses across the barrier - no accesses before the barrier are moved after the barrier and no accesses after the barrier are moved before the barrier
synchronization variable
a data structure used for coordinating concurrent access to shared state
state variable
member variable of a shared object
mutual exclusion
when one thread uses a lock to prevent concurrent access to a shared data structure
lock
a type of synchronization variable used for enforcing atomic, mutually exclusive access to shared data
critical section
a sequence of code that operates on shared state
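A minimal sketch of a lock enforcing mutual exclusion around a critical section, using Python's threading module (the counter and loop counts are illustrative):

```python
import threading

count = 0                 # shared state variable
lock = threading.Lock()   # synchronization variable protecting it

def worker():
    global count
    for _ in range(100_000):
        with lock:        # acquire; released automatically on block exit
            count += 1    # critical section: a read-modify-write on shared state

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)  # 200000 -- without the lock, interleaved updates could be lost
```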
condition variable
a synchronization variable that enables a thread to efficiently wait for a change to shared state protected by a lock
5 steps for synchronizing accesses to a shared object in a multithreaded environment
add a lock
add code to acquire and release the lock
identify and add condition variables
add loops to wait using the condition variables
add signal and broadcast calls
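The five steps above, sketched on a small bounded buffer (class and method names are illustrative):

```python
import threading
from collections import deque

class BoundedQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = deque()                        # state variable
        self.lock = threading.Lock()                # step 1: add a lock
        self.not_full = threading.Condition(self.lock)    # step 3: condition
        self.not_empty = threading.Condition(self.lock)   #   variables

    def put(self, item):
        with self.lock:                             # step 2: acquire/release
            while len(self.items) == self.capacity: # step 4: wait in a loop
                self.not_full.wait()
            self.items.append(item)
            self.not_empty.notify()                 # step 5: signal waiters

    def get(self):
        with self.lock:
            while not self.items:
                self.not_empty.wait()
            item = self.items.popleft()
            self.not_full.notify()
            return item
```

Waiting in a while loop (step 4) matters because a woken thread must re-check the condition: another thread may have consumed the state change first.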
reader/writer lock
lock which allows multiple reader threads to access shared data concurrently provided they never modify the shared data, but still provides mutual exclusion whenever a writer thread is reading or modifying the shared data
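A sketch of a reader/writer lock built from a condition variable (illustrative only; this simple version can starve writers):

```python
import threading

class ReaderWriterLock:
    """Allows many concurrent readers OR a single writer at a time."""
    def __init__(self):
        self.cond = threading.Condition()
        self.readers = 0
        self.writing = False

    def start_read(self):
        with self.cond:
            while self.writing:          # readers wait out any active writer
                self.cond.wait()
            self.readers += 1

    def done_read(self):
        with self.cond:
            self.readers -= 1
            if self.readers == 0:
                self.cond.notify_all()   # a writer may now proceed

    def start_write(self):
        with self.cond:
            while self.writing or self.readers > 0:
                self.cond.wait()
            self.writing = True          # mutual exclusion for the writer

    def done_write(self):
        with self.cond:
            self.writing = False
            self.cond.notify_all()
```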
synchronization barrier
a synchronization primitive where n threads operating in parallel check in to the barrier when their work is completed. no thread returns from the barrier until all n check in
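A small example with Python's built-in threading.Barrier (the thread count and work are illustrative):

```python
import threading

results = []                     # list.append is thread-safe in CPython
barrier = threading.Barrier(3)   # n = 3 threads must check in

def phase_worker(i):
    results.append(f"work-{i}")  # this thread's share of the work
    barrier.wait()               # no thread returns until all 3 check in
    # past this point, every thread's work above is complete

threads = [threading.Thread(target=phase_worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```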
starvation
the lack of progress for one task due to resources given to higher priority tasks
atomic read-modify-write instruction
a processor-specific instruction that lets one thread temporarily have exclusive and atomic access to a memory location while the instruction executes. typically, the instruction (atomically) reads a memory location, does some simple arithmetic operation on the value, and stores the result
spinlock
a lock where a thread waiting for a busy lock “spins” in a tight loop until some other thread makes it free
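A sketch of a spinlock built on test-and-set. The real test-and-set is a single atomic hardware instruction (e.g. x86 `xchg`); here an internal Lock merely simulates that atomicity for illustration, which a real spinlock would never do:

```python
import threading

class SpinLock:
    def __init__(self):
        self.held = False
        self._atomic = threading.Lock()  # stands in for instruction atomicity

    def test_and_set(self):
        """Simulated atomic RMW: read the old value, set to True."""
        with self._atomic:
            old = self.held
            self.held = True
            return old

    def acquire(self):
        while self.test_and_set():       # spin in a tight loop while BUSY
            pass

    def release(self):
        self.held = False

s = SpinLock()
s.acquire()
# ... critical section ...
s.release()
```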
Is there a lock/spinlock attribute defined for the cv class?
the purpose of the cv is to efficiently manage blocking and waking up threads based on conditions; a spinlock is unnecessary and would add busy waiting
Serialization
event a must happen before event b
mutual exclusion
event a and b must not happen at the same time
"For purposes of synchronization, there is no difference between the parallel model and the multithread model. The issue is the same—within one processor (or one thread) we know the order of execution, but between processors (or threads) it is ___ ___ ___."
impossible to tell
Semaphore
a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system
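A small example using Python's threading.Semaphore to bound concurrent access to a resource (the counters are illustrative bookkeeping):

```python
import threading

pool = threading.Semaphore(2)   # at most 2 threads use the resource at once
in_use = 0
max_in_use = 0
meta = threading.Lock()         # protects the two counters above

def use_resource():
    global in_use, max_in_use
    with pool:                  # P/wait: blocks while the count is zero
        with meta:
            in_use += 1
            max_in_use = max(max_in_use, in_use)
        with meta:
            in_use -= 1
                                # V/signal happens when the with-block exits

threads = [threading.Thread(target=use_resource) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```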
false sharing
extra inter-processor communication required because a single cache entry contains portions of two different data structures with different sharing patterns
fine-grained locking
a way to increase concurrency by partitioning an object’s state into different subsets each protected by a different lock
ownership design pattern
a technique for managing concurrent access to shared objects in which at most one thread owns an object at any time, and therefore the thread can access the shared data without a lock
test and test-and-set
an implementation of a spinlock where the waiting processor waits until the lock is FREE before attempting to acquire it
compare-and-swap
an atomic read-modify-write instruction that first tests the value of a memory location, and if the value has not been changed, sets it to a new value
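A sketch of a compare-and-swap retry loop incrementing a shared counter. As with test-and-set above, the internal Lock only simulates the atomicity of the hardware instruction; the retry loop is the part that matters:

```python
import threading

class AtomicInt:
    def __init__(self, value=0):
        self.value = value
        self._atomic = threading.Lock()  # stands in for instruction atomicity

    def compare_and_swap(self, expected, new):
        """Simulated CAS: set to `new` only if the value is still `expected`."""
        with self._atomic:
            if self.value == expected:
                self.value = new
                return True
            return False

counter = AtomicInt(0)

def increment():
    while True:                          # retry until our CAS wins
        old = counter.value
        if counter.compare_and_swap(old, old + 1):
            return

def hammer():
    for _ in range(1000):
        increment()

threads = [threading.Thread(target=hammer) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter.value)  # 4000 -- every increment eventually succeeds exactly once
```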
Mellor-Crummey Scott or MCS lock
an efficient spinlock implementation where each waiting thread spins on a separate memory location
read-copy-update or RCU
a synchronization abstraction that allows concurrent access to a data structure by multiple readers and a single writer at a time
publish RCU
for a read-copy-update lock, a single, atomic memory write that updates a shared object protected by the lock. the write allows new reader threads to observe the new version of the object
grace period RCU
for a shared object protected by a read-copy-update lock, the time from when a new version of a shared object is published until the last reader of the old version is guaranteed to be finished
lock-free data structures
a concurrent data structure that guarantees progress for some thread: some method will finish in a finite number of steps, regardless of the state of other threads executing in the data structure
deadlock
a cycle of waiting among a set of threads, where each thread waits for some other thread in the cycle to take some action
bounded resources
a necessary condition for deadlock: there are a finite number of resources that threads can simultaneously use
no preemption
a necessary condition for deadlock to occur: once a thread acquires a resource, its ownership cannot be revoked until the thread acts to release it
wait while holding
a necessary condition for deadlock to occur: a thread holds one resource while waiting for another
circular waiting
a necessary condition for deadlock to occur: there is a set of threads such that each thread is waiting for a resource held by another
lock ordering
a widely used approach to prevent deadlock, where locks are acquired in a pre-determined order
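A minimal sketch of lock ordering: every thread acquires both locks in the same global order (here, by object id, an arbitrary but consistent choice), so circular waiting cannot arise:

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def with_both(f):
    # Sort the locks into a pre-determined global order before acquiring.
    # Because no thread ever holds the "later" lock while waiting for the
    # "earlier" one, a cycle of waiting is impossible.
    first, second = sorted((lock_a, lock_b), key=id)
    with first, second:
        return f()

result = with_both(lambda: "no deadlock")
```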
safe state (deadlock)
in the context of deadlock, a state of an execution such that regardless of a sequence of future resource requests, there is at least one safe sequence of decisions as to when to satisfy requests such that all pending and future requests are met
unsafe state (deadlock)
in the context of deadlock, a state of an execution such that there is at least one sequence of future resource requests that leads to deadlock no matter what processing order is tried
deadlocked state
the system has at least one deadlock
two recovery techniques discussed
proceed without the resource, and transactions (roll back and retry)
workload
a set of tasks for some system to perform, along with when each task arrives and how long each task takes to complete
response time
the time for a task to complete, from when it starts until it is done
throughput
the rate at which a group of tasks is completed
fairness
partitioning of shared resources between users or applications either equally or balanced according to some desired priorities
starvation
the lack of progress for one task, due to resources given to higher priority tasks
compute-bound task
a task that primarily uses the processor and does little I/O
I/O-bound task
a task that primarily does I/O and does little processing
preemption
when a scheduler takes the processor away from one task and gives it to another
time quantum or time slice
the length of time that a task is scheduled before being preempted
round robin
a scheduling policy that takes turns running each ready task for a limited period before switching to the next task
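A toy simulation of round robin (task names and times are illustrative): each task runs for one time quantum, then is preempted and sent to the back of the ready queue.

```python
from collections import deque

def round_robin(tasks, quantum):
    """tasks: {name: remaining_time}; returns tasks in completion order."""
    ready = deque(tasks.items())
    done = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= quantum                    # run for one time slice
        if remaining > 0:
            ready.append((name, remaining))     # preempt: back of the queue
        else:
            done.append(name)
    return done

print(round_robin({"A": 3, "B": 1, "C": 2}, quantum=1))  # ['B', 'C', 'A']
```

Note how the shortest task (B) finishes first even though it arrived after A, which is why round robin gives good response time for short tasks.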
short-term scheduler or dispatcher runs
after every interrupt
medium-term scheduler or swapper runs
every few seconds
long-term scheduler or batch job initiator runs
on the order of minutes
affinity scheduling
scheduling policy where tasks are preferentially scheduled onto the same processor they had previously been assigned, to improve cache reuse
oblivious scheduling
scheduling policy where the os assigns threads to processors without knowledge of the intent of a parallel application
bulk synchronous
a type of parallel application where work is split into independent tasks, and where each task must complete before the results of any of the tasks can be used
gang scheduling
a scheduling policy for multiprocessors that runs all of the runnable tasks for a particular process at the same time
priority inversion
a scheduling anomaly that occurs when a high priority task waits indefinitely for a resource (such as a lock) held by a low priority task, because the low priority task is waiting in turn for a resource (such as the processor) held by a medium priority task
priority donation or inheritance
a solution to priority inversion: when a thread waits for a lock held by a lower priority thread, the lock holder's priority is temporarily raised to the waiter's priority until the lock is released
priority aging
a thread’s priority decreases as it runs and increases as it waits
priority boosting
a thread’s priority is boosted when signaled
solutions to manage overload
reject some requests in order to preserve response time
reduce service time per request under overload conditions
lock::acquire()
wait until lock is free, then take it
lock::release()
release lock, waking up anyone waiting for it
broadcast
wakes up all waiters on the condition variable, if any
signal
wakes up one waiter on the condition variable, if any