Exam 2 CPSC 3220 PCE

66 Terms

1
New cards

race conditions

when the behavior of a program relies on the interleaving of operations of different threads

2
New cards

atomic operation

an indivisible operation that cannot be interleaved with or split by other operations

3
New cards

memory barrier

an instruction that prevents the compiler and hardware from reordering memory accesses across the barrier - no accesses before the barrier are moved after the barrier and no accesses after the barrier are moved before the barrier

4
New cards

synchronization variable

a data structure used for coordinating concurrent access to shared state

5
New cards

state variable

member variable of a shared object

6
New cards

mutual exclusion

when one thread uses a lock to prevent concurrent access to a shared data structure

7
New cards

lock

a type of synchronization variable used for enforcing atomic, mutually exclusive access to shared data

8
New cards

critical section

a sequence of code that operates on shared state

9
New cards

condition variable

a synchronization variable that enables a thread to efficiently wait for a change to shared state protected by a lock

10
New cards

5 steps for synchronizing accesses to a shared object in a multithreaded environment

add a lock

add code to acquire and release the lock

identify and add condition variables

add loops to wait using the condition variables

add signal and broadcast calls
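
The five steps above can be sketched as a small bounded queue in C with pthreads (a sketch only; the `BoundedQueue` name and the capacity of 8 are illustrative, not from the course):

```c
#include <pthread.h>

/* Bounded queue following the five steps (illustrative sketch). */
typedef struct {
    pthread_mutex_t lock;        /* step 1: add a lock */
    pthread_cond_t  not_full;    /* step 3: condition variables */
    pthread_cond_t  not_empty;
    int buf[8];
    int count, head, tail;
} BoundedQueue;

void bq_init(BoundedQueue *q) {
    pthread_mutex_init(&q->lock, NULL);
    pthread_cond_init(&q->not_full, NULL);
    pthread_cond_init(&q->not_empty, NULL);
    q->count = q->head = q->tail = 0;
}

void bq_put(BoundedQueue *q, int item) {
    pthread_mutex_lock(&q->lock);            /* step 2: acquire */
    while (q->count == 8)                    /* step 4: wait in a loop */
        pthread_cond_wait(&q->not_full, &q->lock);
    q->buf[q->tail] = item;
    q->tail = (q->tail + 1) % 8;
    q->count++;
    pthread_cond_signal(&q->not_empty);      /* step 5: signal */
    pthread_mutex_unlock(&q->lock);          /* step 2: release */
}

int bq_get(BoundedQueue *q) {
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)                    /* step 4: wait in a loop */
        pthread_cond_wait(&q->not_empty, &q->lock);
    int item = q->buf[q->head];
    q->head = (q->head + 1) % 8;
    q->count--;
    pthread_cond_signal(&q->not_full);       /* step 5: signal */
    pthread_mutex_unlock(&q->lock);
    return item;
}
```

Note the while loop around pthread_cond_wait: a woken thread must re-check the condition, since the state may have changed again before it runs.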

11
New cards

reader/writer lock

lock which allows multiple reader threads to access shared data concurrently provided they never modify the shared data, but still provides mutual exclusion whenever a writer thread is reading or modifying the shared data

12
New cards

synchronization barrier

a synchronization primitive where n threads operating in parallel check in to the barrier when their work is completed. no thread returns from the barrier until all n check in

13
New cards

starvation

the lack of progress for one task due to resources given to higher priority tasks

14
New cards

atomic read-modify-write instruction

a processor-specific instruction that lets one thread temporarily have exclusive and atomic access to a memory location while the instruction executes. typically, the instruction (atomically) reads a memory location, does some simple arithmetic operation to the value, and stores the result

15
New cards

spinlock

a lock where a thread waiting for a busy lock “spins” in a tight loop until some other thread makes it free

16
New cards

Is there a lock/spinlock attribute defined for the cv class?

no. the purpose of the cv is to efficiently manage blocking and waking up threads based on conditions; a spinlock is unnecessary and adds busy waiting

17
New cards

Serialization

event a must happen before event b

18
New cards

mutual exclusion

event a and b must not happen at the same time

19
New cards

"For purposes of synchronization, there is no difference between the parallel model and the multithread model. The issue is the same—within one processor (or one thread) we know the order of execution, but between processors (or threads) it is ___ ___ ___."

impossible to tell

20
New cards

Semaphore

a variable or abstract data type used to control access to a common resource by multiple threads and avoid critical section problems in a concurrent system such as a multitasking operating system
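
One way to see how a semaphore behaves is to sketch one from a lock and a condition variable (illustrative names; real code would use the platform's semaphore API):

```c
#include <pthread.h>

/* Counting semaphore built from a lock and a condition variable
   (illustrative sketch). */
typedef struct {
    pthread_mutex_t lock;
    pthread_cond_t  nonzero;
    int value;
} Semaphore;

void sem_init_ex(Semaphore *s, int initial) {
    pthread_mutex_init(&s->lock, NULL);
    pthread_cond_init(&s->nonzero, NULL);
    s->value = initial;
}

/* P(): block until value > 0, then decrement */
void sem_wait_ex(Semaphore *s) {
    pthread_mutex_lock(&s->lock);
    while (s->value == 0)
        pthread_cond_wait(&s->nonzero, &s->lock);
    s->value--;
    pthread_mutex_unlock(&s->lock);
}

/* V(): increment and wake one waiter, if any */
void sem_post_ex(Semaphore *s) {
    pthread_mutex_lock(&s->lock);
    s->value++;
    pthread_cond_signal(&s->nonzero);
    pthread_mutex_unlock(&s->lock);
}
```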

21
New cards

false sharing

extra inter-processor communication required because a single cache entry contains portions of two different data structures with different sharing patterns

22
New cards

fine-grained locking

a way to increase concurrency by partitioning an object’s state into different subsets each protected by a different lock

23
New cards

ownership design pattern

a technique for managing concurrent access to shared objects in which at most one thread owns an object at any time, and therefore the thread can access the shared data without a lock

24
New cards

test and test-and-set

an implementation of a spinlock where the waiting processor waits until the lock is FREE before attempting to acquire it
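
A test-and-test-and-set spinlock can be sketched with C11 atomics, using `atomic_exchange` as the test-and-set primitive (an illustrative sketch):

```c
#include <stdatomic.h>

/* Test-and-test-and-set spinlock sketch. 0 = FREE, 1 = BUSY. */
typedef struct { atomic_int held; } SpinLock;

void spin_acquire(SpinLock *l) {
    for (;;) {
        /* "test": spin with plain loads while the lock looks busy,
           avoiding cache traffic from repeated atomic writes */
        while (atomic_load(&l->held) != 0)
            ;
        /* "test-and-set": attempt the atomic read-modify-write only
           once the lock appeared FREE */
        if (atomic_exchange(&l->held, 1) == 0)
            return;                          /* we got the lock */
    }
}

void spin_release(SpinLock *l) {
    atomic_store(&l->held, 0);
}
```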

25
New cards

compare-and-swap

an atomic read-modify-write instruction that first tests the value of a memory location, and if the value has not been changed, sets it to a new value
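
In C11 this corresponds to `atomic_compare_exchange`; a sketch of a lock-free increment built on it:

```c
#include <stdatomic.h>

/* Compare-and-swap sketch: atomically increment a counter without a
   lock by retrying until no other thread changed the value in between. */
void cas_increment(atomic_int *counter) {
    int old = atomic_load(counter);
    /* if *counter still equals old, set it to old + 1; otherwise old
       is reloaded with the current value and we retry */
    while (!atomic_compare_exchange_weak(counter, &old, old + 1))
        ;
}
```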

26
New cards

Mellor-Crummey Scott or MCS lock

an efficient spinlock implementation where each waiting thread spins on a separate memory location

27
New cards

read-copy-update or RCU

a synchronization abstraction that allows concurrent access to a data structure by multiple readers and a single writer at a time

28
New cards

publish RCU

for a read-copy-update lock, a single, atomic memory write that updates a shared object protected by the lock. the write allows new reader threads to observe the new version of the object

29
New cards

grace period RCU

for a shared object protected by a read-copy-update lock, the time from when a new version of a shared object is published until the last reader of the old version is guaranteed to be finished

30
New cards

lock-free data structures

concurrent data structure that guarantees progress for some thread: some method will finish in a finite number of steps, regardless of the state of other threads executing in the data structure

31
New cards

deadlock

a cycle of waiting among a set of threads, where each thread waits for some other thread in the cycle to take some action

32
New cards

bounded resources

a necessary condition for deadlock: there are a finite number of resources that threads can simultaneously use

33
New cards

no preemption

a necessary condition for deadlock to occur: once a thread acquires a resource, its ownership cannot be revoked until the thread acts to release it

34
New cards

wait while holding

a necessary condition for deadlock to occur: a thread holds one resource while waiting for another

35
New cards

circular waiting

a necessary condition for deadlock to occur: there is a set of threads such that each thread is waiting for a resource held by another

36
New cards

lock ordering

a widely used approach to prevent deadlock, where locks are acquired in a predetermined order
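
One common realization of lock ordering is to acquire multiple locks in address order (a sketch; the helper names are illustrative):

```c
#include <pthread.h>

/* Lock-ordering sketch: always acquire the two locks in address order,
   so no two threads can wait for each other in a cycle. */
void acquire_both(pthread_mutex_t *a, pthread_mutex_t *b) {
    if (a < b) {                  /* pointer address as the fixed order */
        pthread_mutex_lock(a);
        pthread_mutex_lock(b);
    } else {
        pthread_mutex_lock(b);
        pthread_mutex_lock(a);
    }
}

void release_both(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}
```

Two threads calling acquire_both(&m1, &m2) and acquire_both(&m2, &m1) end up locking in the same order, which breaks the circular-waiting condition.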

37
New cards

safe state (deadlock)

in the context of deadlock, a state of an execution such that regardless of a sequence of future resource requests, there is at least one safe sequence of decisions as to when to satisfy requests such that all pending and future requests are met

38
New cards

unsafe state(deadlock)

in the context of deadlock, a state of an execution such that there is at least one sequence of future resource requests that leads to deadlock no matter what processing order is tried

39
New cards

deadlocked state

the system has at least one deadlock

40
New cards

two recovery techniques discussed

proceed without the resource

transaction rollback and retry

41
New cards

workload

a set of tasks for some system to perform, along with when each task arrives and how long each task takes to complete

42
New cards

response time

the time for a task to complete, from when it starts until it is done

43
New cards

throughput

the rate at which a group of tasks is completed

44
New cards

fairness

partitioning of shared resources between users or applications either equally or balanced according to some desired priorities

45
New cards

starvation

the lack of progress for one task, due to resources given to higher priority tasks

46
New cards

compute-bound task

a task that primarily uses the processor and does little I/O

47
New cards

I/O-bound task

a task that primarily does I/O and does little processing

48
New cards

preemption

when a scheduler takes the processor away from one task and gives it to another

49
New cards

time quantum or time slice

the length of time that a task is scheduled before being preempted

50
New cards

round robin

a scheduling policy that takes turns running each ready task for a limited period before switching to the next task
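
A round-robin schedule for tasks that all arrive at time zero can be sketched as a simple simulation (illustrative; supports up to 16 tasks):

```c
/* Round-robin sketch: simulate one processor running tasks in turns
   with a fixed time quantum, recording each task's completion time. */
void round_robin(int n, const int burst[], int quantum, int finish[]) {
    int remaining[16];                       /* n must be <= 16 */
    for (int i = 0; i < n; i++)
        remaining[i] = burst[i];
    int time = 0, done = 0;
    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (remaining[i] <= 0)
                continue;                    /* task already finished */
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                   /* run task for one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {         /* task completes */
                finish[i] = time;
                done++;
            }
        }
    }
}
```

With bursts {3, 5, 2} and a quantum of 2, the tasks finish at times 7, 10, and 6: short tasks get out quickly, at the cost of extra context switches for long ones.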

51
New cards

short-term scheduler or dispatcher runs

after every interrupt

52
New cards

medium-term scheduler or swapper runs

every few seconds

53
New cards

long-term scheduler or batch job initiator runs

on the order of minutes

54
New cards

affinity scheduling

scheduling policy where tasks are preferentially scheduled onto the same processor they had previously been assigned, to improve cache reuse

55
New cards

oblivious scheduling

scheduling policy where the os assigns threads to processors without knowledge of the intent of a parallel application

56
New cards

bulk synchronous

a type of parallel application where work is split into independent tasks and where each task must complete before the results of any of the tasks can be used

57
New cards

gang scheduling

a policy for multiprocessors that runs all of the runnable tasks for a particular process at the same time

58
New cards

priority inversion

a scheduling anomaly that occurs when a high priority task waits indefinitely for a resource (such as a lock) held by a low priority task, because the low priority task is waiting in turn for a resource (such as the processor) held by a medium priority task

59
New cards

priority donation or inheritance

a solution to priority inversion: when a thread waits for a lock held by a lower priority thread, the lock holder's priority is temporarily raised to the waiter's priority until the lock is released

60
New cards

priority aging

a thread’s priority decreases as it runs and increases as it waits

61
New cards

priority boosting

a thread’s priority is boosted when signaled

62
New cards

solutions to manage overload

reject some requests in order to preserve response time

reduce service time per request under overload conditions

63
New cards

lock::acquire()

wait until lock is free, then take it

64
New cards

lock::release()

release lock, waking up anyone waiting for it

65
New cards

broadcast

condition variable operation that wakes up all waiters, if any

66
New cards

signal

condition variable operation that wakes up one waiter, if any