5. Communication Model of Parallel Platforms
There are two primary ways for parallel tasks to exchange data: accessing a shared data space or exchanging messages.
Platforms supporting shared data space are called shared-address-space machines or multiprocessors.
Platforms supporting message exchange are known as message-passing platforms or multicomputers.
Shared-Address-Space Platforms
In shared-address-space platforms, all processors can access part or all of the memory.
Processors interact by modifying data objects stored in this shared address space.
Shared-address-space platforms are classified as uniform memory access (UMA) or non-uniform memory access (NUMA), depending on whether the time to access a memory word is the same for all processors.
On NUMA machines, algorithms must exploit data locality to achieve good performance.
Programming shared-address-space platforms is generally easier than message passing, but read and write access to shared data must be coordinated, and caching of shared data raises cache coherence issues.
Some platforms provide a shared address map without cache coherence; these are known as non-cache-coherent shared-address-space machines.
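The coordination problem above can be sketched with a minimal example. This is an illustrative sketch using Python threads, not any platform-specific API: several threads interact by modifying a data object in their shared address space, and a lock coordinates the conflicting updates.

```python
import threading

# Shared data object in the common address space.
counter = 0
lock = threading.Lock()

def increment(times):
    """Each thread repeatedly updates the shared counter."""
    global counter
    for _ in range(times):
        with lock:  # coordinate access: counter += 1 is not atomic
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; unpredictable without it
```

Removing the lock makes the read-modify-write on `counter` a race condition, which is exactly the kind of uncoordinated access the non-cache-coherent discussion warns about.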
Shared-Address-Space vs. Shared Memory Machines
Shared address space refers to a programming abstraction, while shared memory is a physical attribute of the machine.
It's possible to provide shared address space using physically distributed memory.
Message-Passing Platforms
These platforms consist of nodes, each with a processor and its own exclusive memory; clustered workstations and non-shared-address-space multicomputers are typical examples.
Message-passing platforms are programmed using send and receive primitives; libraries such as MPI and PVM provide these operations.
Message Passing vs. Shared Address Space Platforms
Message passing requires little hardware support beyond a network connecting the nodes.
Shared-address-space platforms can easily emulate message passing (a send becomes a write to shared memory, a receive a read), whereas emulating a shared address space on a message-passing platform is costly, since every access to remote data turns into a message.
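Why the emulation is easy in one direction can be shown in a short sketch (illustrative Python, not a real platform API): on a shared-address-space machine, a "send" is just a write into a shared buffer and a "receive" is a read from it, with a thread-safe queue playing the role of that buffer.

```python
import queue
import threading

# A shared buffer in the common address space acts as the message channel.
channel = queue.Queue()

def producer():
    channel.put("hello")       # "send": write into shared memory

def consumer(out):
    out.append(channel.get())  # "receive": read from shared memory

received = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(received,))
t1.start(); t2.start()
t1.join(); t2.join()

print(received[0])  # hello
```

The reverse direction has no such shortcut: to emulate a shared address space on message-passing hardware, every ordinary load or store to remote data must be translated into a request message and a reply, which is far more expensive.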