Parallel Computing - Midterms - Crash Course

84 Terms

2
Atomic
A C++ feature that ensures safe access to shared data in multi-threaded programs without using locks.
3
Atomic Operations
Operations that complete without interruption, preventing race conditions.
4
std::atomic
Guarantees atomicity for individual operations, not for sequences of operations.
5
std::atomic
Many atomic operations are lock-free, meaning they don't use mutexes and are faster.
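A minimal sketch (assumed, not from the course material) of a shared counter protected by std::atomic instead of a mutex; the counter name, thread count, and iteration count are illustrative:

```cpp
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::atomic<int> counter{0};              // shared counter, safe without a mutex

    std::vector<std::thread> workers;
    for (int i = 0; i < 4; ++i) {
        workers.emplace_back([&counter] {
            for (int j = 0; j < 10000; ++j) {
                counter.fetch_add(1);          // atomic increment, no lock needed
            }
        });
    }
    for (auto& t : workers) t.join();

    std::cout << counter.load() << '\n';       // always 40000; a plain int could lose updates
}
```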
6
Mutex
Stands for mutual exclusion.
7
Mutex
Ensures that only one thread can access a critical section at any one time.
8
Mutex
A synchronization primitive used to protect shared data from being accessed simultaneously by multiple threads.
9
std::mutex
Helps prevent data races and ensures data consistency.
10
lock()
Blocks the calling thread until it successfully locks the mutex.
11
unlock()
Releases the mutex, allowing other threads to acquire it.
12
try_lock()
Attempts to lock the mutex without blocking.
13
try_lock()
Returns true if successful, false otherwise.
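A minimal sketch, with illustrative names, of lock() and unlock() guarding a critical section on a std::mutex:

```cpp
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;          // protects shared_total
int shared_total = 0;

void add(int n) {
    mtx.lock();          // blocks until this thread owns the mutex
    shared_total += n;   // critical section: only one thread at a time
    mtx.unlock();        // releases the mutex so other threads can acquire it
}

int main() {
    std::thread t1(add, 5);
    std::thread t2(add, 7);
    t1.join();
    t2.join();
    std::cout << shared_total << '\n';   // 12
}
```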
14
std::lock_guard
An elegant way to manage mutexes and ensure that they are properly locked and unlocked.
15
std::lock_guard
Prevents potential deadlocks and race conditions.
16
std::lock_guard
Locks the mtx mutex when the guard object is created.
17
std::lock_guard
Automatically unlocks the mutex when the guard object goes out of scope.
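A minimal std::lock_guard sketch, assuming a mutex named mtx as in the cards above; the unlock happens automatically when the guard leaves scope:

```cpp
#include <mutex>
#include <thread>
#include <vector>

std::mutex mtx;
std::vector<int> shared_log;

void record(int value) {
    std::lock_guard<std::mutex> guard(mtx);  // locks mtx when the guard is created
    shared_log.push_back(value);             // critical section
}                                            // guard goes out of scope here: mtx unlocked automatically

int main() {
    std::thread t1(record, 1);
    std::thread t2(record, 2);
    t1.join();
    t2.join();
}
```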
18
Condition Variables
Used for thread synchronization, allowing threads to wait until a particular condition is met.
19
Condition Variables
Part of the <condition_variable> header in C++.
20
cv.notify_one()
Notifies one waiting thread.
21
cv.wait(lock)
Waits until a condition is met and the lock can be acquired.
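A minimal wait/notify sketch (assumed, not taken from the slides) using std::condition_variable with a simple ready flag:

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable cv;
bool ready = false;

void worker() {
    std::unique_lock<std::mutex> lock(mtx);
    cv.wait(lock, [] { return ready; });   // releases mtx while waiting, re-acquires it when woken
    std::cout << "worker proceeding\n";
}

int main() {
    std::thread t(worker);
    {
        std::lock_guard<std::mutex> guard(mtx);
        ready = true;                      // update the condition under the mutex
    }
    cv.notify_one();                       // wake one waiting thread
    t.join();
}
```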
22
Reentrant Lock
A mutex that can be locked multiple times by the same thread.
23
Reentrant Lock
Keeps track of how many times it's been locked by the owning thread.
24
Reentrant Lock
Must be unlocked an equal number of times before another thread can lock it.
25
Reentrant Lock
Can be useful when a program needs to lock a mutex multiple times before unlocking it.
26
Reentrant Lock
Using a reentrant mutex can prevent deadlocks that occur when a thread tries to lock a mutex it already holds.
27
Reentrant Lock
If not unlocked the same number of times it was locked, it can still lead to deadlock.
28
Reentrant Lock
Useful for recursive functions that lock a mutex.
29
Recursive Mutex
Another term for a reentrant mutex or recursive lock.
30
std::recursive_mutex
Allows the same thread to acquire the lock multiple times without causing a deadlock.
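A minimal std::recursive_mutex sketch; the recursive countdown function is a hypothetical example showing the same thread re-locking a mutex it already owns:

```cpp
#include <mutex>

std::recursive_mutex rmtx;

// Hypothetical recursive function: each call locks the same mutex again.
// With a plain std::mutex this would deadlock; std::recursive_mutex just counts the locks.
int countdown(int n) {
    std::lock_guard<std::recursive_mutex> guard(rmtx);  // lock count +1 (and -1 on return)
    if (n <= 0) return 0;
    return countdown(n - 1);   // re-locks rmtx in the nested call
}

int main() {
    countdown(3);   // locks rmtx four times in total and unlocks it four times
}
```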
31
Try Lock
A non-blocking version of the lock or acquire method.
32
Try Lock
Returns immediately, indicating if the lock was acquired.
33
Try Lock
Returns TRUE if the mutex is available and locked.
34
Try Lock
Returns FALSE if the mutex is already possessed by another thread.
35
try_lock()
(C++ specific) Attempts to acquire a lock without blocking.
36
try_lock()
(C++ specific) Returns false immediately if the lock is already acquired by another thread.
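A minimal try_lock() sketch with illustrative names: the thread that finds the mutex busy moves on instead of blocking:

```cpp
#include <iostream>
#include <mutex>
#include <thread>

std::mutex mtx;

void attempt_work(int id) {
    if (mtx.try_lock()) {       // non-blocking: returns immediately
        std::cout << id << " got the lock\n";
        mtx.unlock();
    } else {
        std::cout << id << " lock busy, doing other work instead\n";
    }
}

int main() {
    std::thread t1(attempt_work, 1);
    std::thread t2(attempt_work, 2);
    t1.join();
    t2.join();
}
```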
37
Read-write Lock
A synchronization primitive that allows multiple threads to read a shared resource simultaneously or one thread to write exclusively.
38
Read-write Lock
Can be locked in shared read mode (multiple readers) or exclusive write mode (single writer).
39
Read-write Lock
Useful when there are many more reader threads than writer threads.
40
std::shared_mutex
Allows multiple threads to read from a shared resource simultaneously while ensuring exclusive access for writing.
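A minimal std::shared_mutex sketch (names are illustrative): readers take a shared lock, the writer takes an exclusive one:

```cpp
#include <mutex>
#include <shared_mutex>
#include <string>

std::shared_mutex rw_lock;
std::string config = "default";

std::string read_config() {
    std::shared_lock<std::shared_mutex> lock(rw_lock);  // shared (read) mode: many readers at once
    return config;
}

void write_config(const std::string& value) {
    std::unique_lock<std::shared_mutex> lock(rw_lock);  // exclusive (write) mode: single writer
    config = value;
}

int main() {
    write_config("updated");
    std::string current = read_config();
    (void)current;
}
```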
41
Liveness
A set of properties that require concurrent programs to make progress.
42
Liveness
Guarantees that all processes will eventually make progress in a well-written program.
43
Deadlock
A situation where each member of a group is waiting for some other member to take action, resulting in no progress.
44
Deadlock
Can occur if a thread tries to lock a mutex it has already locked (if it's not a reentrant mutex).
45
Dining Philosophers Problem
A classic example illustrating synchronization issues with multiple threads and locks.
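A minimal sketch, with hypothetical fork names, of one standard fix for dining-philosophers-style deadlock: acquiring both mutexes at once with std::scoped_lock (C++17) rather than one at a time:

```cpp
#include <mutex>
#include <thread>

std::mutex left_fork;    // illustrative names, not from the course material
std::mutex right_fork;

void philosopher_eats() {
    // std::scoped_lock locks both mutexes using a deadlock-avoidance algorithm,
    // instead of each thread grabbing one fork and waiting forever for the other.
    std::scoped_lock both(left_fork, right_fork);
    // ... eat: both forks are held here ...
}                        // both forks released when the scoped_lock is destroyed

int main() {
    std::thread p1(philosopher_eats);
    std::thread p2(philosopher_eats);
    p1.join();
    p2.join();
}
```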
46
Abandoned Lock
Occurs when a thread acquires a lock and terminates unexpectedly without releasing it.
47
Starvation
Occurs when a thread is unable to gain access to a necessary resource and cannot make progress.
48
Starvation
Can be caused by higher priority threads being scheduled more often.
49
Starvation
Can be caused by having too many concurrent threads.
50
Livelock
Similar to deadlock, but threads are actively trying to resolve the problem, yet still make no progress.
51
Livelock
Can occur when threads are designed to respond to each other's actions.
52
Livelock
Often caused by algorithms intended to detect and recover from deadlock.
53
Concurrency
When multiple tasks can run in overlapping time periods.
54
Concurrency
An illusion of parallel execution due to fast CPU switching (on a single core).
55
Parallelism
When tasks actually run in parallel on multiple CPUs or cores.
56
Sequential Computing
Tasks are executed one after another, without overlapping time periods.
57
Flynn's Taxonomy
A classification of computer architectures based on instruction and data streams.
58
SISD (Single Instruction, Single Data)
A uniprocessor machine executing a single instruction on a single data stream.
59
SISD
Machine instructions are processed sequentially.
60
SIMD (Single Instruction, Multiple Data)
A multiprocessor machine executing the same instruction on all CPUs but operating on different data streams.
61
SIMD
Well suited for scientific computing involving vector and matrix operations.
62
MISD (Multiple Instruction, Single Data)
A multiprocessor machine executing different instructions on different PEs (processing elements), but all operating on the same dataset.
63
MIMD (Multiple Instruction, Multiple Data)
A multiprocessor machine capable of executing multiple instructions on multiple data sets.
64
MIMD
Each PE has separate instruction and data streams.
65
MIMD
PEs work asynchronously.
66
Shared-memory MIMD
All PEs are connected to a single global memory.
67
Distributed-memory MIMD
All PEs have a local memory.
68
Amdahl's Law
States that the theoretical speedup of a program is determined by the fraction of code that can be parallelized.
69
Amdahl's Law
Demonstrates that the potential for speeding up a program using parallel computing is limited by its sequential parts.
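The usual formulation of the law, with p for the parallelizable fraction and N for the number of processors (this notation is assumed, not quoted from the course):

```latex
% Speedup predicted by Amdahl's Law for N processors,
% where p is the fraction of the program that can be parallelized:
S(N) = \frac{1}{(1 - p) + \frac{p}{N}}

% Example: p = 0.9, N = 8  =>  S(8) = 1 / (0.1 + 0.1125) \approx 4.7
% Even with unlimited processors the speedup is capped at 1 / (1 - p) = 10.
```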
70
Process
An instance of a running program.
71
Process
Has a separate memory address space and runs independently.
72
Thread
The unit of execution within a process.
73
Thread
A process can have one or many threads.
74
Multithreaded Processes
Processes containing more than one thread, accomplishing multiple tasks concurrently.
75
Race Condition
An undesirable situation when a device or system attempts to perform two or more operations at the same time that must be done in a proper sequence.
76
Producer-Consumer Problem
A classic synchronization problem where one or more producers generate data that is consumed by one or more consumers.
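A minimal producer-consumer sketch (assumed, not from the slides) using a queue, a mutex, and a condition variable; item count and names are illustrative:

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>

std::mutex mtx;
std::condition_variable cv;
std::queue<int> buffer;
bool done = false;

void producer() {
    for (int i = 0; i < 5; ++i) {
        {
            std::lock_guard<std::mutex> guard(mtx);
            buffer.push(i);                 // produce one item
        }
        cv.notify_one();                    // wake a waiting consumer
    }
    {
        std::lock_guard<std::mutex> guard(mtx);
        done = true;                        // signal that production has finished
    }
    cv.notify_one();
}

void consumer() {
    while (true) {
        std::unique_lock<std::mutex> lock(mtx);
        cv.wait(lock, [] { return !buffer.empty() || done; });
        while (!buffer.empty()) {
            std::cout << "consumed " << buffer.front() << '\n';
            buffer.pop();
        }
        if (done) break;
    }
}

int main() {
    std::thread p(producer);
    std::thread c(consumer);
    p.join();
    c.join();
}
```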
77
<thread>
C++ header for thread support.
78
std::thread
C++ class that manages a separate thread.
79
std::jthread
C++20 thread class with support for auto-joining and cancellation.
80
this_thread::get_id()
Returns the thread ID of the current thread.
81
this_thread::sleep_for()
Stops the execution of the current thread for a specified duration.
82
this_thread::sleep_until()
Stops the execution of the current thread until a specified time point.
83
join()
Waits for a thread to finish its execution.
84
detach()
Permits a thread to execute independently from the thread handle.
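A minimal sketch pulling the thread-management pieces together; names and durations are illustrative, and the std::jthread line is shown as a comment because it requires C++20:

```cpp
#include <chrono>
#include <iostream>
#include <thread>

void task() {
    std::cout << "worker id: " << std::this_thread::get_id() << '\n';
    std::this_thread::sleep_for(std::chrono::milliseconds(100));  // pause this thread briefly
}

int main() {
    std::cout << "main id:   " << std::this_thread::get_id() << '\n';

    std::thread t1(task);
    t1.join();               // wait for t1 to finish

    std::thread t2(task);
    t2.detach();             // t2 now runs independently of the handle

    // std::jthread t3(task);   // C++20: would join automatically in its destructor

    std::this_thread::sleep_for(std::chrono::milliseconds(200));  // give the detached t2 time to run
}
```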