Parallel Crash Course


46 Terms

1. Amdahl's Law
A formula used to find the maximum theoretical speedup of a program when only part of the program is parallelized. It states that the potential speedup is limited by the sequential portion of the task.
2. Asynchronous Programming
A programming technique that allows a program to initiate a long-running task and remain responsive to other events without waiting for the task to complete. The program is notified of the result once the task is finished.
3. Atomic Operations (std::atomic)
Operations that are guaranteed to complete without interruption from other threads, preventing race conditions on single data items without explicit locks.
4. Barrier (std::barrier)
A synchronization primitive where a group of threads or processes must wait at a specific point until all members of the group have reached that point. Unlike latches, these can be reused.

5. Binary Semaphore
A type of semaphore that can only have two values (0 or 1), often used as a mutex to ensure mutual exclusion for a critical section.
6. Concurrency
The ability of a system to handle multiple tasks seemingly at the same time, often by rapidly switching between them on a single processor, creating an illusion of simultaneous execution.
7. Condition Variable (std::condition_variable)
A synchronization primitive used in C++ to allow threads to wait until a specific condition is met, typically used in conjunction with a mutex.
8. Counting Semaphore
A type of semaphore whose value can range over an unrestricted domain, used to control access to a resource that has multiple instances, where the value indicates the number of available resources.
9. Critical Section
A segment of code where shared resources are accessed. Only one thread should be allowed to execute within a critical section at any given time to prevent data corruption.
10. Deadlock
A state in concurrent computing where two or more threads are blocked indefinitely, each waiting for the other to release a resource.
11. detach()
A C++ std::thread member function that permits a thread to execute independently from its thread handle. The calling thread no longer waits for the detached thread to complete.
12. Distributed Computing
A model in which components of a software system are spread across multiple computers, communicating and coordinating to achieve a common goal.
13. File Mapping
An Interprocess Communication (IPC) method that allows multiple processes to access a shared file as if it were part of their own address space, effectively creating shared memory.
14. File (IPC)
A data record stored on disk (or generated on demand) that multiple processes can open and read or write, serving as a simple, persistent means of interprocess communication.
15. Flynn's Taxonomy
A classification system for computer architectures based on the number of instruction streams and data streams processed simultaneously (SISD, SIMD, MISD, MIMD).
16. std::future
A C++ template that provides a mechanism to access the result of an asynchronous operation, acting as a placeholder for a value that may not yet be available.
17. Interprocess Communication (IPC)
Mechanisms provided by an operating system that allow different processes to communicate and synchronize with each other, sharing data or coordinating events.
18. join()
A C++ std::thread member function that blocks the calling thread until the thread it is called on has finished its execution.
19. Latch (std::latch)
A downward counter synchronization primitive in C++20 that threads can block on until the counter reaches zero. It is a single-use barrier and cannot be reset.
20. Liveness
A set of properties in concurrent programming that guarantee that processes or threads will eventually make progress and not get stuck indefinitely (e.g., freedom from deadlock, starvation, livelock).
21. Livelock
A situation in concurrent computing similar to deadlock, where two or more threads are actively changing their states in response to each other, but without making any useful progress.
22. std::lock_guard
A C++ RAII (Resource Acquisition Is Initialization) wrapper for a mutex that automatically locks the mutex upon construction and unlocks it when it goes out of scope, simplifying mutex management and preventing forgotten unlocks.
23. Message Queues
An IPC mechanism where multiple processes can read and write data to a queue without being directly connected, with messages stored until retrieved by their recipient.
24. MIMD (Multiple Instruction, Multiple Data)
An architecture where multiple processors can execute different instructions on different data streams simultaneously, common in multi-core systems.
25. MISD (Multiple Instruction, Single Data)
An architecture where multiple instructions operate on the same data stream, often found in fault-tolerant systems.
26. Moore's Law
An observation stating that the number of transistors on an integrated circuit doubles approximately every two years, historically leading to significant increases in sequential processing power.
27. Mutex (std::mutex)
Short for "mutual exclusion," a synchronization primitive used to protect shared data from simultaneous access by multiple threads, ensuring only one thread can access a critical section at a time.
28. Parallel Computing
A type of computation where many calculations or the execution of processes are carried out simultaneously, typically on multiple processors or cores, to solve a large problem faster.
29. Parallelism
When multiple tasks truly run at the exact same time on distinct processing units (e.g., multiple CPU cores).
30. Pipe
A unidirectional data channel used for interprocess communication, typically between related processes (parent-child). Two pipes can form a bidirectional channel.
31. Process
An instance of a computer program that is being executed. Each process has its own separate memory address space and resources.
32. Producer-Consumer Problem
A classic synchronization problem in concurrent programming where one or more "producer" threads generate data and one or more "consumer" threads process it, often using a shared buffer.
33. Race Condition
An undesirable situation in concurrent programming where the outcome of shared data access depends on the non-deterministic order of execution of multiple threads.
34. Read-Write Lock (std::shared_mutex)
A synchronization primitive that allows multiple threads to read a shared resource concurrently (shared read mode) but provides exclusive access for writing (exclusive write mode).
35. Reentrant Lock (std::recursive_mutex)
A type of mutex that allows the same thread to acquire the lock multiple times without causing a deadlock, keeping a count of how many times it has been locked by the owning thread.
36. Semaphore
(See Binary Semaphore and Counting Semaphore)
37. Sequential Computing
A computing paradigm where tasks are executed one after another in a linear, step-by-step fashion without overlapping time periods.
38. Shared Memory
An Interprocess Communication (IPC) mechanism where a region of memory is simultaneously accessible by multiple processes, allowing for very fast data exchange.
39. Signal
A limited form of IPC: short, predefined system messages sent from the kernel or one process to another, used to notify a process of events (such as interrupts or termination requests) rather than to transfer data.
40. SIMD (Single Instruction, Multiple Data)
An architecture where a single instruction operates on multiple data streams simultaneously, common in vector processors and GPUs.
41. SISD (Single Instruction, Single Data)
A traditional architecture where a uniprocessor machine executes a single instruction on a single data stream sequentially.
42. Socket
An endpoint for sending or receiving data in a network, used for IPC between processes on the same computer or between different computers across a network.
43. Software Crisis
A term coined in the late 1960s to describe the difficulties and challenges in developing and maintaining complex software systems.
44. Starvation
A liveness issue in concurrent programming where a thread is repeatedly denied access to a shared resource or CPU time, preventing it from making progress.
45. Thread
The smallest unit of execution within a process. A single process can contain multiple threads, sharing the process's memory space but having independent execution paths.
46. try_lock()
A std::mutex member function that attempts to acquire a lock without blocking. It returns true if successful and false immediately if the lock is already held by another thread.