ECE 7400 – Concurrent Programming & OpenMP Essentials

Description and Tags

Comprehensive vocabulary flashcards covering key terms and concepts from ECE 7400 lectures on concurrency, parallelism, synchronization primitives, design patterns, and OpenMP.

90 Terms

1

Concurrent Computing

A computing model where multiple computations are executed during overlapping time periods, emphasizing correct handling of simultaneous execution flows.

2

Parallel Computing

A computing approach in which many calculations or processes are carried out simultaneously to achieve faster run-times.

3

Distributed Computing

Computation performed by components located on different networked computers that communicate to achieve a common goal.

4

Concurrency (in practice)

The situation of multiple execution flows (e.g., threads) accessing shared resources at the same time, not necessarily for speed.

5

Parallelism (in practice)

Using multiple processing resources (CPUs/cores) simultaneously to solve a problem faster.

6

Shared Resource

A data structure or device accessed by more than one thread or process concurrently.

7

Responsiveness

Ability of an application to remain reactive to user or system events by delegating time-consuming tasks to separate threads.

8

Failure Isolation

Design principle where an exception in one concurrent task does not stop other tasks from running.

9

Moore’s Law

Observation that transistor counts on integrated circuits double approximately every two years.

10

Multicore Machine

A single computer containing two or more independent processing cores on one chip.

11

GPU (Graphics Processing Unit)

Hardware accelerator originally for graphics; now widely used for massively parallel computations with many simple ALUs.

12

Flynn’s Taxonomy

Classification of computer architectures into SISD, SIMD, MISD, and MIMD based on number of instruction and data streams.

13

SISD

Single Instruction, Single Data – traditional sequential machine executing one instruction on one piece of data at a time.

14

SIMD

Single Instruction, Multiple Data – architecture where one instruction operates on multiple data items simultaneously.

15

MISD

Multiple Instructions, Single Data – rare architecture used mainly for fault-tolerant systems.

16

MIMD

Multiple Instructions, Multiple Data – architecture with many independent processors; includes most multicore CPUs and GPUs.

17

Shared-Memory Machine

Parallel system where multiple CPUs access a single, common memory address space.

18

Distributed-Memory Machine

Parallel system composed of separate computers that communicate via message passing; each has its own local memory.

19

Master-Worker Model

Shared-memory configuration where some processors have dedicated roles (e.g., I/O or graphics).

20

Symmetric Multiprocessor (SMP)

Shared-memory system in which all CPUs are identical and have equal access to memory.

21

Speedup

Ratio t_seq / t_par measuring the improvement of a parallel program over its sequential counterpart.

22

Efficiency (Parallel)

Speedup divided by number of processing units; indicates average utilization of each unit.

23

Amdahl’s Law

Upper bound on speedup: 1 / ((1 − α) + α/N), where α is the parallelizable fraction and N the number of processing units; gains remain limited whenever the sequential fraction (1 − α) is non-zero.
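
A minimal C++ sketch of evaluating the bound (illustrative; the helper name amdahl_bound and the sample values α = 0.9, N = 8 are assumptions, not from the lecture):

    #include <cstdio>

    // Evaluates 1 / ((1 - alpha) + alpha / n): alpha is the parallelizable
    // fraction, n the number of processing units.
    double amdahl_bound(double alpha, double n) {
        return 1.0 / ((1.0 - alpha) + alpha / n);
    }

    int main() {
        // 90% parallelizable code on 8 cores: 1/(0.1 + 0.9/8) = ~4.71,
        // well short of the ideal 8x speedup.
        std::printf("%.2f\n", amdahl_bound(0.9, 8.0));
    }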

24

Gustafson-Barsis Law

Alternative speedup model assuming the problem size scales with the number of processors; predicts higher potential gains than Amdahl's Law.

25

Linear Speedup

Ideal case where speedup equals number of processors (efficiency = 100%).

26

Super-Linear Speedup

Rare situation where speedup exceeds number of processors, often due to cache effects.

27

Profiling

Measuring where a program spends time to guide optimization; done via instrumentation or sampling.

28

Instrumentation

Profiler technique that inserts extra code to gather execution data, requiring recompilation.

29

Sampling (Profiling)

Profiler method that periodically interrupts execution to record current function without modifying code.

30

Scalability

Ability of a parallel program to maintain efficiency as problem size or processor count increases.

31

Process

An executing instance of a program, containing one or more threads and its own resources.

32

Thread

Smallest unit of execution scheduled by the OS; shares process resources but has its own stack.

33

Fork (Process)

System call that duplicates a process, creating a child with its own copy of code and data.
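
A minimal POSIX sketch (illustrative; the printed messages are assumptions):

    #include <cstdio>
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
        pid_t pid = fork();               // duplicate the calling process
        if (pid == 0) {
            std::printf("child: runs with its own copy of data\n");
        } else if (pid > 0) {
            wait(nullptr);                // parent waits for the child
            std::printf("parent: child %d finished\n", (int)pid);
        }
        return 0;
    }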

34

Run-Time Stack

Per-thread memory region holding local variables, return addresses, and function call information.

35

Thread Spawning

Creating a new thread within a process to execute concurrently with the parent thread.

36

Race Condition

Program anomaly where result depends on relative timing of events due to unsynchronized access to shared data.

37

Data Race

Specific race condition where two or more threads access the same memory location concurrently and at least one access is a write.
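
A minimal C++ sketch of a data race (illustrative): two threads update one counter with no synchronization, so increments are lost; a mutex or std::atomic<int> would remove the race.

    #include <cstdio>
    #include <thread>

    int counter = 0;                      // shared memory location

    int main() {
        auto work = [] {
            for (int i = 0; i < 100000; ++i)
                ++counter;                // unsynchronized read-modify-write
        };
        std::thread t1(work), t2(work);
        t1.join();
        t2.join();
        // Often prints less than 200000 because increments are lost.
        std::printf("%d\n", counter);
    }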

38

Atomic Operation

Uninterruptible action that either completes entirely or not at all, preventing intermediate inconsistent states.

39

Critical Section

Portion of code that must be executed by only one thread at a time to avoid races.

40

Mutex (Mutual Exclusion)

Synchronization object allowing one thread at a time to enter a critical section.
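
A minimal C++ sketch (illustrative), fixing the lost-update problem above with std::mutex and the RAII wrapper std::lock_guard:

    #include <mutex>
    #include <thread>

    int counter = 0;
    std::mutex m;

    void safe_increment() {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m); // one thread at a time
            ++counter;                           // the critical section
        }                                        // lock released here
    }

    int main() {
        std::thread t1(safe_increment), t2(safe_increment);
        t1.join();
        t2.join();                               // counter is exactly 200000
    }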

41

Semaphore

Synchronization primitive with an integer counter and atomic acquire/release operations, used for locking, resource counting, or signaling.
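
A minimal C++20 sketch using std::counting_semaphore (illustrative; the limit of 3 resource instances is an assumption). std::binary_semaphore is the counter-of-one variant described in the next card:

    #include <semaphore>
    #include <thread>
    #include <vector>

    std::counting_semaphore<3> slots(3);   // 3 instances of the resource

    void use_resource() {
        slots.acquire();                   // P: take an instance or block
        // ... at most 3 threads execute here at once ...
        slots.release();                   // V: give the instance back
    }

    int main() {
        std::vector<std::thread> ts;
        for (int i = 0; i < 8; ++i) ts.emplace_back(use_resource);
        for (auto& t : ts) t.join();
    }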

42

Binary Semaphore

Semaphore initialized to 1, functioning similarly to a mutex.

43

Counting Semaphore

Semaphore initialized to a value greater than 1, representing the number of available instances of a resource.

44

Deadlock

Situation where a group of threads cannot proceed because each is waiting for resources held by others.

45

Starvation

Condition where a thread is indefinitely delayed from making progress, often due to scheduling or resource contention.

46

Acquire (P)

Semaphore operation that decrements the counter and blocks the thread if the result is negative.

47

Release (V)

Semaphore operation that increments the counter and wakes a waiting thread if any exist.

48

Producer-Consumer Problem

Classical synchronization scenario where producers generate data placed into a buffer and consumers remove it.
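
A minimal C++20 sketch of a bounded buffer (illustrative; the capacity of 4 and the item count are assumptions), using two counting semaphores for free/filled slots and a mutex for the buffer itself:

    #include <cstdio>
    #include <mutex>
    #include <queue>
    #include <semaphore>
    #include <thread>

    std::queue<int> buffer;                    // shared bounded buffer
    std::mutex m;                              // guards the queue itself
    std::counting_semaphore<4> empty_slots(4); // capacity 4: free slots
    std::counting_semaphore<4> full_slots(0);  // filled slots, none yet

    void producer() {
        for (int i = 0; i < 10; ++i) {
            empty_slots.acquire();             // wait for room
            { std::lock_guard<std::mutex> lk(m); buffer.push(i); }
            full_slots.release();              // signal an item is ready
        }
    }

    void consumer() {
        for (int i = 0; i < 10; ++i) {
            full_slots.acquire();              // wait for an item
            int v;
            { std::lock_guard<std::mutex> lk(m); v = buffer.front(); buffer.pop(); }
            std::printf("consumed %d\n", v);
        }
    }

    int main() {
        std::thread p(producer), c(consumer);
        p.join();
        c.join();
    }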

49

Blocking

State in which a thread is suspended, waiting for a condition or resource before continuing.

50

RAII

‘Resource Acquisition Is Initialization’ – C++ idiom tying resource lifetime to object lifetime via constructors/destructors.
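
A minimal sketch of the idiom with a file handle (illustrative; the class name File and the file name out.txt are assumptions):

    #include <cstdio>

    // The constructor acquires the resource (a FILE*) and the destructor
    // releases it, even if an exception unwinds the stack first.
    class File {
        std::FILE* f_;
    public:
        explicit File(const char* path) : f_(std::fopen(path, "w")) {}
        ~File() { if (f_) std::fclose(f_); }
        File(const File&) = delete;            // exactly one owner
        File& operator=(const File&) = delete;
        std::FILE* get() const { return f_; }
    };

    int main() {
        File log("out.txt");                   // acquisition is initialization
        if (log.get()) std::fprintf(log.get(), "hello\n");
    }                                          // fclose runs automatically here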

51

Smart Pointer

C++ template object that manages dynamic memory automatically using RAII semantics.

52

unique_ptr

Smart pointer type with sole ownership of a dynamically allocated object; non-copyable, move-only.

53

shared_ptr

Smart pointer allowing multiple owners of an object, with reference-counted lifetime management.
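
A minimal sketch covering this card and the previous one (illustrative; the struct Node is an assumption):

    #include <memory>

    struct Node { int value = 0; };

    int main() {
        // unique_ptr: sole ownership, move-only.
        std::unique_ptr<Node> u = std::make_unique<Node>();
        std::unique_ptr<Node> u2 = std::move(u);   // ownership transferred

        // shared_ptr: reference-counted shared ownership.
        std::shared_ptr<Node> a = std::make_shared<Node>();
        std::shared_ptr<Node> b = a;               // use count is now 2
        a.reset();                                 // use count drops to 1
    }                                              // last owner frees the Node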

54

Lambda Function

Inline, unnamed function object capable of capturing variables from surrounding scope.
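
A minimal sketch (illustrative; the variable threshold is an assumption):

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
        int threshold = 3;   // local variable captured by the lambda
        std::vector<int> v{1, 4, 2, 5};
        // [threshold] captures by value; [&threshold] would capture by reference.
        auto big = std::count_if(v.begin(), v.end(),
                                 [threshold](int x) { return x > threshold; });
        std::printf("%ld\n", (long)big);   // prints 2
    }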

55

scoped_lock

RAII wrapper that acquires one or more mutexes upon construction and releases them on destruction; when given multiple mutexes it locks them with a deadlock-avoidance algorithm.
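
A minimal C++17 sketch (illustrative; the function transfer and the two counters are assumptions):

    #include <mutex>
    #include <thread>

    std::mutex m1, m2;
    int a = 0, b = 0;

    void transfer() {
        // Locks both mutexes via a deadlock-avoidance algorithm, so two
        // threads naming them in opposite orders cannot deadlock.
        std::scoped_lock lock(m1, m2);
        ++a;
        --b;
    }                                  // both mutexes released here

    int main() {
        std::thread t1(transfer), t2(transfer);
        t1.join();
        t2.join();
    }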

56

Condition Variable

Synchronization primitive that blocks threads until notified, always used with a mutex.

57

Spurious Wakeup

Phenomenon where a thread waiting on a condition variable wakes without a corresponding notification; requires re-checking condition.
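
A minimal sketch covering this card and the previous one (illustrative); the predicate overload of wait re-checks the condition on every wakeup, which is the standard defense against spurious wakeups:

    #include <condition_variable>
    #include <mutex>
    #include <thread>

    std::mutex m;
    std::condition_variable cv;
    bool ready = false;

    void waiter() {
        std::unique_lock<std::mutex> lk(m);
        // The predicate is re-checked on every wakeup, so a spurious
        // wakeup just puts the thread back to sleep.
        cv.wait(lk, [] { return ready; });
    }

    void notifier() {
        { std::lock_guard<std::mutex> lk(m); ready = true; }
        cv.notify_one();
    }

    int main() {
        std::thread w(waiter), n(notifier);
        w.join();
        n.join();
    }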

58

Monitor Pattern

Design encapsulating shared data with the synchronization needed to access it within a single class/module.

59

PCAM Methodology

Parallel design process of Partitioning, Communication, Agglomeration, and Mapping proposed by Ian Foster.

60

Partitioning (PCAM)

Breaking computation or data into discrete tasks that could execute in parallel.

61

Communication (PCAM)

Identification of data exchanges required between tasks created during partitioning.

62

Agglomeration (PCAM)

Combining tasks to reduce communication and overhead.

63

Mapping (PCAM)

Assigning tasks or task groups to processors with load balancing and locality considerations.

64

Decomposition Pattern

Reusable strategy for breaking a problem into parallel tasks (e.g., geometric, divide-and-conquer, pipeline).

65

Geometric Decomposition

Splitting data structures like arrays or grids along dimensions to create independent sub-problems.

66

Divide-and-Conquer

Algorithmic technique that recursively splits a problem, solves sub-problems, and merges results; parallelizable via tasks.

67

Task Parallelism

Parallel model where different tasks (functions or stages) run concurrently rather than splitting data.

68

Pipeline Pattern

Parallel structure where data items pass through a sequence of stages, each handled by a separate task.

69

Globally Parallel, Locally Sequential (GPLS)

Program structure where multiple tasks execute concurrently but each task runs sequential code.

70

Globally Sequential, Locally Parallel (GSLP)

Program structure that runs sequentially overall but executes certain regions in parallel when advantageous.

71

OpenMP

API of compiler directives, runtime functions, and environment variables for shared-memory parallel programming in C/C++ and Fortran.

72

Parallel Region (OpenMP)

Code block executed simultaneously by a team of threads created with #pragma omp parallel.
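
A minimal sketch (illustrative; compile with an OpenMP flag such as g++ -fopenmp). It also shows the fork-join model of the next card: the master thread forks the team at the directive and joins it at the implicit barrier:

    #include <cstdio>
    #include <omp.h>

    int main() {
        // Fork: a team of threads executes the structured block below;
        // an implicit barrier joins them at the closing brace.
        #pragma omp parallel
        {
            std::printf("hello from thread %d of %d\n",
                        omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;   // only the master thread continues here
    }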

73

Fork-Join Model

Execution paradigm where the master thread forks a team for parallel work and joins back after completion.

74

Structured Block

Single-entry, single-exit code block associated with an OpenMP directive.

75

Thread Team (OpenMP)

Set of threads that execute a particular parallel region.

76

Shared Variable (OpenMP)

Data visible to all threads in a parallel region; default for variables defined outside the region.

77

Private Variable (OpenMP)

Data for which each thread gets its own instance inside a parallel region.

78

Reduction Variable (OpenMP)

Variable for which each thread keeps a private copy whose values are combined at region end using a specified operation.
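
A minimal sketch (illustrative; the harmonic-sum loop is an assumption):

    #include <cstdio>

    int main() {
        double sum = 0.0;
        // Each thread accumulates into its own private copy of sum; the
        // copies are combined with + when the loop finishes.
        #pragma omp parallel for reduction(+ : sum)
        for (int i = 1; i <= 1000000; ++i)
            sum += 1.0 / i;
        std::printf("%f\n", sum);
    }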

79

firstprivate

OpenMP clause giving each thread a private copy initialized with the master’s value.
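
A minimal sketch (illustrative; the variable offset is an assumption):

    #include <cstdio>

    int main() {
        int offset = 100;   // set by the master thread before the region
        // Each thread's private copy of offset starts at 100; with plain
        // private(offset) the copies would be uninitialized.
        #pragma omp parallel for firstprivate(offset)
        for (int i = 0; i < 8; ++i)
            std::printf("%d\n", i + offset);
    }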

80

Barrier (OpenMP)

Implicit or explicit synchronization point where threads wait until all have arrived.

81

Critical Region (OpenMP)

OpenMP construct ensuring that a block of code is executed by only one thread at a time.

82

atomic Directive (OpenMP)

Directive specifying that a single memory update is to be executed atomically without full critical-section overhead.
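
A minimal sketch contrasting atomic with the critical construct of the previous card (illustrative; the counters are assumptions):

    #include <cstdio>

    int main() {
        int total = 0, hist[4] = {0, 0, 0, 0};
        #pragma omp parallel for
        for (int i = 0; i < 1000; ++i) {
            #pragma omp atomic        // single memory update, low overhead
            ++total;

            #pragma omp critical      // arbitrary block, one thread at a time
            { ++hist[i % 4]; }
        }
        std::printf("%d %d\n", total, hist[0]);   // prints 1000 250
    }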

83

static Schedule

OpenMP loop schedule assigning fixed iteration chunks to threads in a round-robin manner.

84

dynamic Schedule

OpenMP schedule where threads request new iteration chunks at run-time, aiding load balance.

85

guided Schedule

Dynamic schedule where chunk size starts large and decreases over time for reduced overhead.
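
A minimal sketch covering the three schedule cards (illustrative; the uneven inner loop is an assumption; schedule(static) and schedule(guided) are drop-in alternatives):

    #include <cmath>
    #include <cstdio>

    int main() {
        double acc = 0.0;
        // Iteration cost grows with i, so fixed static chunks would load
        // threads unevenly; dynamic hands out 64-iteration chunks on demand.
        #pragma omp parallel for schedule(dynamic, 64) reduction(+ : acc)
        for (int i = 0; i < 10000; ++i) {
            for (int k = 0; k <= i; ++k)      // uneven per-iteration work
                acc += std::sqrt((double)k);
        }
        std::printf("%f\n", acc);
    }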

86

collapse Clause

OpenMP clause that merges perfectly nested loops to increase parallel work granularity.
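
A minimal sketch (illustrative; the 4 x 1000 grid is an assumption):

    #include <cstdio>

    int main() {
        static double grid[4][1000];
        // collapse(2) merges the perfectly nested 4 x 1000 loops into one
        // 4000-iteration space, so more than 4 threads can share the work.
        #pragma omp parallel for collapse(2)
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 1000; ++j)
                grid[i][j] = i * 1000 + j;
        std::printf("%f\n", grid[3][999]);   // prints 3999.000000
    }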

87

Loop-Carried Dependency

Data dependence where one iteration of a loop relies on results from another, hindering parallelization.
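
A minimal sketch (illustrative): a prefix sum, where iteration i reads what iteration i−1 wrote, so the loop cannot simply be marked parallel for without changing its meaning:

    #include <cstdio>

    // Iteration i reads the value iteration i-1 just wrote, so the loop
    // cannot be parallelized as-is; parallel prefix sums need a
    // different algorithm (e.g., a tree-based scan).
    void prefix_sum(double* a, int n) {
        for (int i = 1; i < n; ++i)
            a[i] += a[i - 1];   // loop-carried dependence on a[i - 1]
    }

    int main() {
        double a[5] = {1, 2, 3, 4, 5};
        prefix_sum(a, 5);
        std::printf("%f\n", a[4]);   // prints 15.000000
    }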

88

task Directive (OpenMP)

Construct that packages a code block and data environment as a unit to be executed later by any thread.

89

depend Clause (OpenMP)

Task clause declaring data dependencies (in, out) to enforce execution ordering between tasks.
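
A minimal sketch covering this card and the previous one (illustrative; the variable x is an assumption):

    #include <cstdio>

    int main() {
        int x = 0;
        #pragma omp parallel
        #pragma omp single            // one thread creates both tasks
        {
            #pragma omp task depend(out : x)
            x = 42;                   // producer task

            #pragma omp task depend(in : x)
            std::printf("%d\n", x);   // ordered after the producer; prints 42
        }
    }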

90

False Sharing

Performance degradation when threads repeatedly write to distinct variables that reside on the same cache line.
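
A minimal sketch of the usual remedy (illustrative; the 64-byte line size is a common x86 value, an assumption on other machines):

    #include <thread>

    // Without padding, both counters share one cache line and writes from
    // different cores invalidate it back and forth; alignas(64) puts each
    // counter on its own line.
    struct Padded { alignas(64) long value = 0; };
    Padded counters[2];

    void bump(int idx) {
        for (long i = 0; i < 10000000; ++i)
            ++counters[idx].value;
    }

    int main() {
        std::thread t0(bump, 0), t1(bump, 1);
        t0.join();
        t1.join();
    }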