Vocabulary flashcards for reviewing parallel computing terminology.
Serial Computing
Software written for sequential instruction execution on a single processor.
Parallel Computing
Simultaneous use of multiple compute resources to solve a problem by breaking it into discrete, concurrent parts.
Multi-core Systems
The architectural trend toward processors with multiple cores on one chip, which makes parallel programming necessary to exploit available hardware.
Serial Process
A process run entirely by one core of one processor, executing tasks sequentially.
Parallel Process
A process divided among multiple cores, with sub-processes potentially sharing memory.
Von Neumann Architecture
A stored-program computer architecture in which program instructions and data are held in the same electronic memory.
Flynn's Taxonomy
Classification of computer architectures based on instruction and data streams (Single or Multiple).
SISD (Single Instruction, Single Data)
A serial, non-parallel computer executing one instruction stream on one data stream.
SIMD (Single Instruction, Multiple Data)
A parallel computer where all processing units execute the same instruction on different data elements.
MISD (Multiple Instruction, Single Data)
A parallel computer where each processing unit executes its own independent instruction stream, but all units operate on the same single data stream.
MIMD (Multiple Instruction, Multiple Data)
A parallel computer where every processor may execute different instructions on different data streams.
Concurrency
A condition where multiple tasks are logically active at one time.
Parallelism
A condition where multiple tasks are actually active at one time.
Shared Memory
Parallel computers where processors can access all memory as a global address space.
UMA (Uniform Memory Access)
Shared memory architecture with identical processors and equal access times to memory.
NUMA (Non-Uniform Memory Access)
Shared memory architecture where not all processors have equal access time to all memories.
Distributed Memory
Systems where processors have their own local memory, requiring a network for inter-processor communication.
Hybrid Memory
Systems employing both shared and distributed memory architectures.
Parallel Programming Model
An abstraction of parallel computer architecture used to express algorithms and their composition in programs.
Data Parallelism
A single program applied to separate data sets across multiple processing units.
Task Parallelism
Multiple independent units of work running concurrently on multiple processors.
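By contrast, a hedged sketch of task parallelism using the standard-library `concurrent.futures` module: two *different* functions (independent units of work, names illustrative) run concurrently on the same input.

```python
from concurrent.futures import ThreadPoolExecutor

# Two independent units of work: different functions, not different data.
def word_count(text):
    return len(text.split())

def char_count(text):
    return len(text)

doc = "task parallelism runs distinct tasks concurrently"
with ThreadPoolExecutor(max_workers=2) as pool:
    f1 = pool.submit(word_count, doc)   # task 1
    f2 = pool.submit(char_count, doc)   # task 2
words, chars = f1.result(), f2.result()
print(words, chars)
```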
Fork-Join Pattern
Parallel programming pattern where threads are spawned in parallel and merged at the end.
OpenMP
An API of compiler directives, library routines, and environment variables for shared-memory multithreaded programming in C, C++, and Fortran.
MPI (Message Passing Interface)
A standardized library specification for parallel programming via explicit message passing between processes, typically on distributed-memory systems.
OpenMP Fork-Join Execution Model
OpenMP programs begin with a single master thread; at each parallel region the master forks a team of threads, which join back into the master thread when the region ends.