Parallel Week 11


18 Terms

1

True or False: MPI is limited to point-to-point communication between two processes?

False

2

Communication between more than two processes is called BLANK

Communication between more than two processes is called collective communication

3

The BLANK process has exclusive access to the standard input stream (it can transmit to everyone at any time)

The master process

4

What is the ideal broadcasting method (uses which 2 concepts), producing a complexity of O(n)?

Through MPI point-to-point (P2P) communication, sending to each process in a loop
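The O(n) behavior of a loop of point-to-point sends can be sketched with a small simulation (plain Python standing in for MPI; the function name and dictionary-of-buffers model are illustrative assumptions, not MPI API):

```python
# Hypothetical simulation of a naive broadcast: the root "sends" the
# message to every other process one point-to-point transfer at a time,
# so the number of sends grows linearly with the process count -- O(n).
def linear_broadcast(n_procs, message, root=0):
    buffers = {root: message}        # each rank's receive buffer
    sends = 0
    for rank in range(n_procs):
        if rank != root:
            buffers[rank] = message  # one P2P send per receiver
            sends += 1
    return buffers, sends

buffers, sends = linear_broadcast(8, "data")
assert sends == 7                    # n - 1 point-to-point sends
assert all(v == "data" for v in buffers.values())
```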

5

What is a hypercube algorithm?

An algorithm in which data is distributed to dimensions that have not yet received the data

6

In the hypercube algorithm, each node distributes data to which node(s)?

The data is distributed to nodes in higher dimensions

7

What is a higher dimension?

Ranks whose extra set bit lies above the node's most significant bit (e.g., with 16 nodes, rank 2 (0010) distributes to rank 6 (0110) and rank 10 (1010))
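The doubling pattern of the hypercube cards above can be sketched as a simulation (plain Python, not MPI; the function name and the set-of-holders model are illustrative assumptions). In each round, every rank that already holds the data passes it along one hypercube dimension, so the broadcast finishes in log2(n) rounds instead of n − 1 sends:

```python
# Hypothetical simulation of hypercube broadcast among 2**dim processes:
# in round k, every rank that already holds the data sends it to the
# partner whose rank differs in bit k, so holders double each round.
def hypercube_broadcast(dim, root=0):
    n = 2 ** dim
    holders = {root}
    rounds = 0
    for bit in range(dim):           # one round per hypercube dimension
        new_holders = {rank ^ (1 << bit) for rank in holders}
        holders |= new_holders
        rounds += 1
    return holders, rounds

holders, rounds = hypercube_broadcast(4)   # 16 processes
assert holders == set(range(16))           # everyone received the data
assert rounds == 4                         # log2(16) rounds, not 15 sends
```

Tracing rank 2 (0010) through the rounds reproduces the example on the card: it hands the data to rank 6 (0110) and then rank 10 (1010), the ranks whose new bit sits above its own most significant bit.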

8

Hypercube algorithms only work for BLANK graphs

They only work for complete graphs

9

What does MPI_Bcast() do?

It broadcasts the same information to all nodes in a network optimally

10

MPI Broadcast applies to which nodes?

All nodes of a communicator

11

What is a Scatter algorithm?

An algorithm that distributes memory (data) across different processes

12

With Scatter algorithms, what does each process receive?

A subset of the data

13

Scatter algorithms leave subset assignments to which node?

To the source node

14

Describe Collective Communication with Scatter algorithms in 3 steps:

  1. Provide access to data for processing

  2. Get results back when finished

  3. Recombine
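The three steps above can be sketched end to end as a simulation (plain Python standing in for MPI; the `scatter` and `gather` helpers are illustrative assumptions modeling equal-sized subsets, not the MPI API):

```python
# Hypothetical sketch of the scatter -> compute -> gather pipeline:
# the source partitions the data, each "process" works on its subset,
# and the results are gathered and recombined at the destination.
def scatter(data, n_procs):
    k = len(data) // n_procs         # source node decides the subsets
    return [data[i * k:(i + 1) * k] for i in range(n_procs)]

def gather(parts):
    return [x for part in parts for x in part]

data = list(range(8))
subsets = scatter(data, 4)                        # 1. provide access to data
results = [[x * x for x in s] for s in subsets]   # each process computes
combined = gather(results)                        # 2-3. get results, recombine
assert combined == [x * x for x in data]
```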

15

What does a gather algorithm do?

Collects distributed data from all processes in the communicator into the destination process

16

In which 2 ways is MPI Gather similar to MPI Scatter?

  1. Both use 2 buffers (source and dest)

  2. Calls are identical across participating processes

17

What is an MPI Reduce algorithm, and how is it different from MPI Gather?

MPI Reduce collects data from distributed memory, but it applies an operation to reduce the received data to a single value
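The difference can be sketched as a simulation (plain Python, not MPI; the function name is an illustrative assumption, with + standing in for the reduction operation):

```python
# Hypothetical sketch of reduce semantics: like gather, one value comes
# in from every rank, but an operation (here +) collapses them into a
# single value at the root instead of a concatenated buffer.
def reduce_sum(values_per_rank):
    total = 0
    for v in values_per_rank:   # combine each rank's contribution
        total += v
    return total

assert reduce_sum([1, 2, 3, 4]) == 10   # one value, not a list of four
```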

18

What is the Scan algorithm, and how does it differ from MPI Reduce?

Like MPI Reduce it applies a reduction operation, but each rank receives the result computed over itself and all lower-rank processes (a prefix reduction)
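The scan pattern can be sketched as a simulation (plain Python, not MPI; the function name is an illustrative assumption, with + as the operation):

```python
# Hypothetical sketch of scan (prefix-reduction) semantics: rank i ends
# up with the operation applied over ranks 0..i, so only lower-rank
# contributions are included, unlike reduce's single global result.
def scan_sum(values_per_rank):
    out, running = [], 0
    for v in values_per_rank:
        running += v
        out.append(running)     # rank i receives the sum of ranks 0..i
    return out

assert scan_sum([1, 2, 3, 4]) == [1, 3, 6, 10]
```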