OpenMP and Parallel Programming Concepts

These flashcards cover key concepts and terminology related to OpenMP and parallel programming.

30 Terms

1. Shared Memory

Memory that can be accessed by multiple threads concurrently, allowing them to communicate and synchronize their actions.
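
Example: a minimal sketch of communication through shared memory (the variable name flag is illustrative). The variable is declared before the parallel region, so every thread sees the same storage; the barrier makes the write visible before the reads.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int flag = 0;                      /* one copy, shared by every thread */
        #pragma omp parallel
        {
            if (omp_get_thread_num() == 0)
                flag = 1;                  /* thread 0 writes the shared variable */
            #pragma omp barrier            /* wait here; the write becomes visible */
            printf("thread %d sees flag = %d\n", omp_get_thread_num(), flag);
        }
        return 0;
    }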

2. OpenMP

An API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran.
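
Example: a minimal OpenMP program (compile with an OpenMP-enabled compiler, e.g. gcc -fopenmp):

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel               /* the statement runs once per thread */
        printf("hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }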

3. Static Scheduling

A scheduling method in which loop iterations are divided into fixed chunks and assigned to threads in a round-robin pattern determined before the loop executes, rather than negotiated at run time.
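
Example: a sketch of static scheduling; chunks of 4 iterations are dealt out round-robin before the loop body runs:

    #include <stdio.h>

    int main(void) {
        int a[16];
        #pragma omp parallel for schedule(static, 4)  /* fixed chunk assignment */
        for (int i = 0; i < 16; i++)
            a[i] = i * i;
        printf("a[15] = %d\n", a[15]);
        return 0;
    }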

4. Dynamic Scheduling

A scheduling method that hands out iterations to threads during execution as they become idle, well suited to loops whose iterations vary in cost.
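
Example: a sketch of dynamic scheduling for an unevenly loaded loop (the work function is illustrative); an idle thread claims the next chunk at run time:

    #include <stdio.h>

    long work(int i) {                     /* cost grows with i (illustrative) */
        long s = 0;
        for (long k = 0; k < (long)i * 10000; k++)
            s += k;
        return s;
    }

    int main(void) {
        long total[64];
        #pragma omp parallel for schedule(dynamic, 2) /* chunks claimed on demand */
        for (int i = 0; i < 64; i++)
            total[i] = work(i);
        printf("total[63] = %ld\n", total[63]);
        return 0;
    }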

5. Reduction Clause

A clause that combines per-thread copies of a variable into a single result (e.g., a sum), avoiding race conditions on the shared variable.
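
Example: each thread accumulates into its own copy of sum; the copies are combined with + at the end of the loop:

    #include <stdio.h>

    int main(void) {
        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum)
        for (int i = 1; i <= 1000; i++)
            sum += 1.0 / i;                /* no race: each thread has its own sum */
        printf("harmonic(1000) = %f\n", sum);
        return 0;
    }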

6. Nested Parallelism

The ability for a parallel region to create additional parallel regions within it, allowing for more complex thread hierarchies.
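
Example: a sketch of a parallel region nested inside another (assumes an OpenMP 3.0+ runtime for omp_set_max_active_levels and omp_get_level):

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        omp_set_max_active_levels(2);      /* permit two nested levels */
        #pragma omp parallel num_threads(2)
        {
            #pragma omp parallel num_threads(2)   /* inner team per outer thread */
            printf("nesting level %d, thread %d\n",
                   omp_get_level(), omp_get_thread_num());
        }
        return 0;
    }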

7. Critical Section

A section of code that must be executed by only one thread at a time to avoid race conditions when accessing shared resources.
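
Example: the critical construct serializes the update of a shared histogram, so only one thread at a time executes it:

    #include <stdio.h>

    int main(void) {
        int hist[10] = {0};
        #pragma omp parallel for
        for (int i = 0; i < 1000; i++) {
            #pragma omp critical           /* one thread at a time in this block */
            hist[i % 10]++;
        }
        printf("hist[0] = %d\n", hist[0]); /* reliably 100 */
        return 0;
    }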

8. Private Variables

Variables for which each thread receives its own copy (declared inside the parallel region or listed in a private clause), preventing unintended interference between threads.
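
Example: with private(t), every thread works on its own copy of t; the original t is left untouched:

    #include <stdio.h>

    int main(void) {
        int t = -1;
        #pragma omp parallel for private(t)
        for (int i = 0; i < 8; i++) {
            t = i * i;                     /* writes the thread's own copy */
            printf("iteration %d: t = %d\n", i, t);
        }
        printf("after the loop: t = %d\n", t);  /* still -1 */
        return 0;
    }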

9. firstprivate

A clause in OpenMP that initializes private copies of a variable with the value of the original variable outside the parallel region.
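
Example: each thread's private copy of offset starts with the value 100 from outside the region:

    #include <stdio.h>

    int main(void) {
        int offset = 100;
        #pragma omp parallel for firstprivate(offset)
        for (int i = 0; i < 4; i++)
            printf("offset + i = %d\n", offset + i);  /* copies initialized to 100 */
        return 0;
    }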

10. taskwait

A directive that suspends the current task until all child tasks it has generated so far have completed.
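
Example: the classic recursive-Fibonacci sketch; taskwait blocks until the two child tasks have produced x and y:

    #include <stdio.h>

    static int fib(int n) {
        if (n < 2) return n;
        int x, y;
        #pragma omp task shared(x)
        x = fib(n - 1);
        #pragma omp task shared(y)
        y = fib(n - 2);
        #pragma omp taskwait               /* wait for both children before summing */
        return x + y;
    }

    int main(void) {
        int r;
        #pragma omp parallel
        #pragma omp single                 /* one thread seeds the task tree */
        r = fib(10);
        printf("fib(10) = %d\n", r);
        return 0;
    }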

11. omp_set_num_threads()

A function used to specify the number of threads to use for subsequent parallel regions.
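
Example: requesting three threads for the following region (the runtime may grant fewer):

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        omp_set_num_threads(3);            /* request 3 threads from now on */
        #pragma omp parallel
        printf("thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
        return 0;
    }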

12. #pragma omp task

A directive for creating tasks that can run concurrently, allowing for dynamic workload distribution among threads.
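
Example: a sketch of task-based traversal of a linked list (the node type is illustrative); one thread creates a task per node and the team executes them concurrently:

    #include <stdio.h>
    #include <stdlib.h>

    struct node { int value; struct node *next; };

    int main(void) {
        struct node *head = NULL;          /* build a small list */
        for (int i = 0; i < 5; i++) {
            struct node *n = malloc(sizeof *n);
            n->value = i;
            n->next = head;
            head = n;
        }
        #pragma omp parallel
        #pragma omp single                 /* one thread spawns the tasks */
        for (struct node *p = head; p; p = p->next) {
            #pragma omp task firstprivate(p)   /* one task per node */
            printf("visiting %d\n", p->value);
        }
        return 0;
    }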

13. #pragma omp parallel for

A directive used to parallelize loops by distributing the iterations across multiple threads automatically.
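
Example: the loop's iterations are divided among the team's threads; each element of a is written by exactly one thread:

    #include <stdio.h>

    int main(void) {
        double a[1000], b[1000];
        for (int i = 0; i < 1000; i++)
            b[i] = i;
        #pragma omp parallel for           /* iterations split across threads */
        for (int i = 0; i < 1000; i++)
            a[i] = 2.0 * b[i];
        printf("a[999] = %f\n", a[999]);
        return 0;
    }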

14. firstprivate vs lastprivate

firstprivate initializes each thread's private copy with the original variable's value, while lastprivate copies the value from the sequentially last iteration (or last section) back to the original variable.
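
Example: the copies start at 10 (firstprivate; immediately overwritten here), and the value from the sequentially last iteration, i == 7, is copied back (lastprivate):

    #include <stdio.h>

    int main(void) {
        int x = 10;
        #pragma omp parallel for firstprivate(x) lastprivate(x)
        for (int i = 0; i < 8; i++)
            x = i * i;                     /* each thread's copy starts at 10 */
        printf("after the loop: x = %d\n", x);  /* 49, from iteration i == 7 */
        return 0;
    }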

15. Sections Directive

A directive for dividing a block of code into independent sections, where each section is executed by a single thread.
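
Example: two independent blocks run in parallel, each executed once by some thread:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel sections
        {
            #pragma omp section
            printf("section A on thread %d\n", omp_get_thread_num());
            #pragma omp section
            printf("section B on thread %d\n", omp_get_thread_num());
        }
        return 0;
    }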

16. OMP_NUM_THREADS

An environment variable that sets the default number of threads for OpenMP programs.
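
For example, launching a program as OMP_NUM_THREADS=8 ./a.out makes its parallel regions default to a team of eight threads (./a.out standing in for any OpenMP binary).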

17. Data Race

A condition in which multiple threads access the same shared data concurrently without synchronization and at least one access is a write, making the outcome unpredictable.
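
Example: a deliberately racy counter; increments from different threads can overlap and be lost, so the printed total is unpredictable (contrast with the atomic example below):

    #include <stdio.h>

    int main(void) {
        int count = 0;
        #pragma omp parallel for           /* RACE: unsynchronized read-modify-write */
        for (int i = 0; i < 100000; i++)
            count++;
        printf("count = %d (expected 100000)\n", count);
        return 0;
    }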

18. omp_get_thread_num()

A function that returns the unique identifier (ID) of the executing thread within a parallel region.
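
Example: using the thread ID to give one thread a special role:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        #pragma omp parallel
        {
            int id = omp_get_thread_num(); /* 0 .. team size - 1 */
            if (id == 0)
                printf("thread 0 does the bookkeeping\n");
            else
                printf("thread %d does the work\n", id);
        }
        return 0;
    }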

19. Atomic Directive

A directive that ensures a specific operation on a variable is done atomically to prevent data races.
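
Example: the same counter as the Data Race card, made correct by performing the increment atomically:

    #include <stdio.h>

    int main(void) {
        int count = 0;
        #pragma omp parallel for
        for (int i = 0; i < 100000; i++) {
            #pragma omp atomic             /* the increment is indivisible */
            count++;
        }
        printf("count = %d\n", count);     /* reliably 100000 */
        return 0;
    }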

20. Global Synchronization

A method that ensures all threads reach a certain point in execution before proceeding, often necessitated by dependencies between tasks.
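
Example: a two-phase computation (assuming at most 64 threads); the barrier guarantees every thread has finished phase 1 before any thread reads a neighbour's result in phase 2:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int stage1[64];
        #pragma omp parallel
        {
            int id = omp_get_thread_num();
            int n  = omp_get_num_threads();
            stage1[id] = id * 10;          /* phase 1 */
            #pragma omp barrier            /* global synchronization point */
            printf("thread %d reads %d\n", id, stage1[(id + 1) % n]);  /* phase 2 */
        }
        return 0;
    }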

21. GSLP Execution Model

Globally Sequential, Locally Parallel: an execution model in which the program as a whole proceeds sequentially, with parallel execution confined to selected regions within it.

22. collapse clause

A clause that combines multiple nested loops into a single iteration space for parallelization.
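
Example: collapse(2) fuses the 4 x 100 nest into one 400-iteration space, giving the runtime far more parallelism to distribute than the outer loop's 4 iterations alone:

    #include <stdio.h>

    int main(void) {
        double a[4][100];
        #pragma omp parallel for collapse(2)   /* one combined iteration space */
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 100; j++)
                a[i][j] = i + 0.01 * j;
        printf("a[3][99] = %f\n", a[3][99]);
        return 0;
    }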

23. #pragma omp target

A directive that offloads computations to a target device, usually a GPU, enabling parallel execution.
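
Example: a sketch of offloading a loop (requires a compiler built with offloading support; without a device the region typically falls back to the host). The map clauses describe data movement:

    #include <stdio.h>

    int main(void) {
        double x[1000], y[1000];
        for (int i = 0; i < 1000; i++) { x[i] = i; y[i] = 1.0; }
        #pragma omp target teams distribute parallel for \
                map(to: x) map(tofrom: y)  /* copy x in; copy y in and back out */
        for (int i = 0; i < 1000; i++)
            y[i] += 2.0 * x[i];
        printf("y[999] = %f\n", y[999]);
        return 0;
    }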

24. threadprivate

A directive that makes a global variable private to each thread across multiple parallel regions.
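
Example: a file-scope counter of which each thread keeps its own copy, persisting across parallel regions:

    #include <stdio.h>
    #include <omp.h>

    int counter = 0;
    #pragma omp threadprivate(counter)     /* one copy per thread */

    int main(void) {
        #pragma omp parallel
        {
            counter++;                     /* updates this thread's copy */
            printf("thread %d: counter = %d\n",
                   omp_get_thread_num(), counter);
        }
        return 0;
    }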

25. OMP_MAX_ACTIVE_LEVELS

An environment variable that sets the maximum number of active nested parallel regions in an OpenMP program.
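
For example, running a program as OMP_MAX_ACTIVE_LEVELS=2 ./a.out allows one level of parallel regions nested inside another (./a.out standing in for any OpenMP binary).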

26. Compiler Directives

Special instructions that guide the compiler on how to treat portions of code in relation to parallel execution.

27. Scope Clauses in OpenMP

Clauses used to define the scope of variables in parallel regions, such as shared, private, and reduction.
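
Example: the three most common scope clauses in one loop:

    #include <stdio.h>

    int main(void) {
        int n = 100;                       /* shared: read by every thread */
        double sum = 0.0;                  /* reduction: combined at the end */
        double tmp;                        /* private: per-thread scratch */
        #pragma omp parallel for shared(n) private(tmp) reduction(+:sum)
        for (int i = 0; i < n; i++) {
            tmp = 0.5 * i;
            sum += tmp;
        }
        printf("sum = %f\n", sum);
        return 0;
    }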

28. Task Scheduling

The process of managing the execution of tasks created by the task directive, allowing for more flexible parallelization.

29. Synchronization

The coordination of multiple threads so that they work together correctly, without conflicts.

30. Resource Sharing

The practice of allowing multiple threads to access the same resources while managing potential conflicts and data integrity.