Parallel Computing Concepts


Vocabulary flashcards for reviewing parallel computing terminology.


25 Terms

1

Serial Computing

Software written for sequential instruction execution on a single processor.

2

Parallel Computing

Simultaneous use of multiple compute resources to solve a problem by breaking it into discrete, concurrent parts.

3

Multi-core Systems

Processors that place multiple cores on a single chip; their prevalence is the architecture trend that makes parallel programming a necessary practice.

4

Serial Process

A process run entirely by one core of one processor, executing tasks sequentially.

5

Parallel Process

A process divided among multiple cores, with sub-processes potentially sharing memory.

6

Von Neumann Architecture

A stored-program computer architecture in which program instructions and data are held in the same electronic memory and fetched by the CPU for execution.

7

Flynn's Taxonomy

Classification of computer architectures based on instruction and data streams (Single or Multiple).

8

SISD (Single Instruction, Single Data)

A serial, non-parallel computer executing one instruction stream on one data stream.

9

SIMD (Single Instruction, Multiple Data)

A parallel computer where all processing units execute the same instruction on different data elements.

10

MISD (Multiple Instruction, Single Data)

A parallel computer where multiple processing units operate independently on a single data stream.

11

MIMD (Multiple Instruction, Multiple Data)

A parallel computer where every processor may execute different instructions on different data streams.

12

Concurrency

A condition where multiple tasks are logically active at one time.

13

Parallelism

A condition where multiple tasks are actually active at one time.

14

Shared Memory

Parallel computers where processors can access all memory as a global address space.

15

UMA (Uniform Memory Access)

Shared memory architecture with identical processors and equal access times to memory.

16

NUMA (Non-Uniform Memory Access)

Shared memory architecture where not all processors have equal access time to all memories.

17

Distributed Memory

Systems where processors have their own local memory, requiring a network for inter-processor communication.

18

Hybrid Memory

Systems employing both shared and distributed memory architectures.

19

Parallel Programming Model

An abstraction of parallel computer architecture used to express algorithms and their composition in programs.

20

Data Parallelism

A single program applied to separate data sets across multiple processing units.

21

Task Parallelism

Multiple independent units of work running concurrently on multiple processors.

22

Fork-Join Pattern

Parallel programming pattern in which a main thread forks worker threads to run in parallel, then joins (waits for) them all before continuing.

23

OpenMP

An API of compiler directives, runtime library routines, and environment variables for writing shared-memory, multithreaded programs in C, C++, and Fortran.

24

MPI (Message Passing Interface)

A standardized library interface for parallel programs that communicate by passing messages between processes, typically on distributed-memory systems.
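A sketch of the message-passing style (not runnable standalone: it assumes an MPI implementation, compiled with mpicc and launched with mpirun -np 2). Each process gets a rank; rank 0 sends an integer and rank 1 receives it, with no shared memory involved:

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I? */

    if (rank == 0) {
        int msg = 42;
        /* Explicit communication: send one int to rank 1, tag 0. */
        MPI_Send(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", msg);
    }

    MPI_Finalize();
    return 0;
}
```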

25

OpenMP Fork-Join Execution Model

OpenMP programs begin as a single master thread; at a parallel region the master forks a team of threads, and at the end of the region the team joins back into the master thread, which continues alone.