These flashcards cover key concepts and terminology related to OpenMP and parallel programming.
Shared Memory
Memory that can be accessed by multiple threads concurrently, allowing them to communicate and synchronize their actions.
OpenMP
An API that supports multi-platform shared memory multiprocessing programming in C, C++, and Fortran.
Static Scheduling
A scheduling method where iterations are divided among threads in fixed chunks determined when the loop starts, before any iterations execute.
Dynamic Scheduling
A scheduling method that assigns iterations to threads during execution based on availability, suitable for workload variations.
Reduction Clause
A method that allows variables to be combined from multiple threads into a single result to avoid race conditions.
Nested Parallelism
The ability for a parallel region to create additional parallel regions within it, allowing for more complex thread hierarchies.
Critical Section
A section of code that must be executed by only one thread at a time to avoid race conditions when accessing shared resources.
Private Variables
Variables for which each thread gets its own copy (via the private clause, or by declaring them inside the parallel region), preventing unintended interference between threads.
firstprivate
A clause in OpenMP that initializes private copies of a variable with the value of the original variable outside the parallel region.
taskwait
A directive that makes the current task wait until all of the child tasks it has generated so far have completed.
omp_set_num_threads()
A function used to specify the number of threads that will be used in the next parallel region.
omp task
A directive for creating tasks that can run concurrently, allowing for dynamic workload distribution among threads.
omp parallel for
A directive used to parallelize loops by distributing the iterations across multiple threads automatically.
firstprivate vs lastprivate
firstprivate initializes each thread's private copy with the outer variable's value, while lastprivate copies the value from the sequentially last iteration (not necessarily the last thread to finish) back to the outer variable.
Sections Directive
A directive for dividing a block of code into independent sections, where each section is executed by a single thread.
OMP_NUM_THREADS
An environment variable that sets the default number of threads for OpenMP programs.
Data Race
A condition where multiple threads access shared data concurrently and at least one thread modifies it.
omp_get_thread_num()
A function that returns the unique identifier (ID) of the executing thread within a parallel region.
Atomic Directive
A directive that ensures a specific operation on a variable is done atomically to prevent data races.
Global Synchronization
A method that ensures all threads reach a certain point in execution before proceeding, often necessitated by dependencies between tasks.
GSLP Execution Model
Globally Sequential, Locally Parallel: an execution model in which the overall program runs sequentially, with parallel execution confined to smaller regions within it.
collapse clause
A clause that combines multiple nested loops into a single iteration space for parallelization.
omp target
A directive that offloads computations to a target device, usually a GPU, enabling parallel execution there.
threadprivate
A directive that makes a global variable private to each thread across multiple parallel regions.
OMP_MAX_ACTIVE_LEVELS
An environment variable that sets the maximum number of active nested parallel regions in an OpenMP program.
Compiler Directives
Special instructions that guide the compiler on how to treat portions of code in relation to parallel execution.
Scope Clauses in OpenMP
Clauses used to define the scope of variables in parallel regions, such as shared, private, and reduction.
Task Scheduling
The process of managing the execution of tasks created by the task directive, allowing for more flexible parallelization.
Synchronization
The coordination of multiple threads to ensure they properly work together without conflict.
Resource Sharing
The practice of allowing multiple threads to access the same resources while managing potential conflicts and data integrity.