Practice parallel quiz

Last updated 7:04 PM on 3/13/26

31 Terms

1

A program has 80% of its code parallelizable and 20% strictly serial. What is the maximum speedup possible with infinite processors according to Amdahl's Law?

5x maximum speedup, because S = 1 / (0.2 + 0.8/∞) = 1 / 0.2 = 5.
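The card's arithmetic can be checked with a small sketch. The function name `amdahl_speedup` is illustrative, not from the original; it simply evaluates S = 1 / (s + p/n) for a serial fraction s = 1 − p:

```python
# Illustrative helper for Amdahl's Law: S = 1 / (s + p/n), where
# s is the serial fraction, p the parallelizable fraction, n processors.
def amdahl_speedup(parallel_fraction: float, n: float) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n)

print(amdahl_speedup(0.8, 4))             # finite case: 1 / (0.2 + 0.2) = 2.5
print(amdahl_speedup(0.8, float("inf")))  # limit: 1 / 0.2 = 5.0
```

Note that even with infinitely many processors, the 20% serial portion caps the speedup at 5x.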

2

A program runs in 100 seconds serially. With 4 processors it runs in 30 seconds. What is the speedup and efficiency?

Speedup = 100/30 ≈ 3.33. Efficiency = 3.33/4 ≈ 0.83 (83%).
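The same two definitions (speedup = T_serial / T_parallel, efficiency = speedup / n) can be captured in a small helper; the function name is illustrative:

```python
# Illustrative: speedup = serial time / parallel time; efficiency = speedup / n.
def speedup_and_efficiency(t_serial: float, t_parallel: float, n: int):
    s = t_serial / t_parallel
    return s, s / n

s, e = speedup_and_efficiency(100, 30, 4)
print(f"speedup = {s:.2f}, efficiency = {e:.0%}")  # speedup = 3.33, efficiency = 83%
```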

3

A parallel program shows very little improvement when adding more processors. What is the most likely theoretical explanation?

Amdahl's Law: the serial portion of the program limits total possible speedup.

4

You distribute a large array across 8 processors and each processor calculates the sum of its section before combining results. What type of parallelism is this?

Data parallelism.
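A minimal data-parallelism sketch of this card, assuming Python threads stand in for the 8 processors (for real CPU-bound gains in Python you would use processes, but the decomposition pattern is the same): the same operation (`sum`) runs on different slices of one array, then partial results are combined.

```python
# Data parallelism sketch: same operation on different slices of one array.
from concurrent.futures import ThreadPoolExecutor

def chunk(data, n):
    """Split data into n roughly equal contiguous slices."""
    k = (len(data) + n - 1) // n
    return [data[i:i + k] for i in range(0, len(data), k)]

data = list(range(1_000))
with ThreadPoolExecutor(max_workers=8) as ex:
    partial_sums = list(ex.map(sum, chunk(data, 8)))  # each worker sums its slice
total = sum(partial_sums)                             # combine step
print(total)  # 499500
```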

5

Each processor performs a different stage of a computation pipeline (input, processing, output). What type of parallelism is this?

Task parallelism.

6

Why can a shared-memory system suffer from contention when many processors access the same variable?

Multiple processors attempt to read and write the same memory location at the same time, so those accesses must be serialized and kept coherent, causing delays.

7

Why are distributed-memory systems often more scalable than shared-memory systems?

Because each processor has its own memory, reducing contention and allowing more processors to be added without a shared bottleneck.

8

A program uses MPI_Reduce to sum values from all processors. Which processor receives the final result?

The root process.

9

When would MPI_Allreduce be preferred over MPI_Reduce?

When all processors need the final reduced value rather than just the root.

10

What problem occurs if two threads update the same variable simultaneously without synchronization?

Race condition.

11

Why must critical sections be protected by synchronization mechanisms?

To ensure only one thread accesses shared resources at a time and prevent inconsistent results.
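A runnable sketch of cards 10–11, using Python's `threading.Lock` (names like `worker` are illustrative): the `counter += 1` read-modify-write is the critical section, and guarding it with a lock keeps the final count exact.

```python
# Shared counter incremented by many threads; the lock protects the
# critical section so no read-modify-write updates are lost.
import threading

counter = 0
lock = threading.Lock()

def worker(n):
    global counter
    for _ in range(n):
        with lock:       # only one thread at a time may enter
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 exactly; without the lock, updates could be lost
```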

12

What is the difference between a mutex and a semaphore?

A mutex allows only one thread to access a resource, while a semaphore allows a limited number of threads based on a counter.
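The distinction can be demonstrated with Python's `threading` module (the resource and counters here are illustrative): `Lock` admits one thread, while `Semaphore(3)` admits up to three at once.

```python
# Semaphore(3) admits at most three threads into the "resource" at once;
# a Lock would admit exactly one. peak tracks observed concurrency.
import threading
import time

sem = threading.Semaphore(3)     # counting semaphore with capacity 3
state_lock = threading.Lock()    # mutex protecting the bookkeeping below
active = 0
peak = 0

def use_resource():
    global active, peak
    with sem:                    # blocks once three threads are inside
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)         # hold the resource briefly
        with state_lock:
            active -= 1

threads = [threading.Thread(target=use_resource) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3, by the semaphore's guarantee
```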

13

What situation can lead to deadlock in parallel programs?

When multiple processes hold resources while waiting for other resources held by each other in a circular dependency.

14

Why can MPI_Send followed by MPI_Recv between two processes sometimes cause deadlock?

If both processes call MPI_Send first and the messages are not buffered, each blocks indefinitely waiting for the other to post its receive.
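This is not real MPI, but a two-"process" sketch using threads and queues can illustrate the classic ordering fix: one side sends first while the other receives first, so the unbuffered-send deadlock of both sides sending simultaneously cannot arise. `Queue.put` stands in for a send and `Queue.get` for a blocking receive; all names here are illustrative.

```python
# Simulated rank-parity ordering: even "ranks" send then receive,
# odd "ranks" receive then send. With truly unbuffered sends, having
# both sides send first would deadlock; this ordering avoids it.
import queue
import threading

chan = {0: queue.Queue(), 1: queue.Queue()}  # chan[r] holds messages for rank r
received = {}

def exchange(rank, msg):
    peer = 1 - rank
    if rank % 2 == 0:                # even rank: send first, then receive
        chan[peer].put(msg)
        received[rank] = chan[rank].get()
    else:                            # odd rank: receive first, then send
        received[rank] = chan[rank].get()
        chan[peer].put(msg)

t0 = threading.Thread(target=exchange, args=(0, "from-0"))
t1 = threading.Thread(target=exchange, args=(1, "from-1"))
t0.start(); t1.start(); t0.join(); t1.join()
print(received)  # each rank holds the other's message
```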

15

How does MPI_Sendrecv help prevent deadlock?

It performs the send and receive simultaneously within a single call.

16

Why are atomic operations important in multithreaded programs?

They ensure operations complete fully without interruption, preventing race conditions.

17

Why might adding more processors decrease efficiency in a parallel program?

Increased communication overhead, synchronization costs, and idle processors reduce efficiency.

18

Why is latency more important than bandwidth for small messages in parallel systems?

Because the startup delay dominates when little data is transmitted.

19

Why is bandwidth more important than latency for large data transfers?

Because the transfer rate determines how quickly large amounts of data move.

20

A GPU applies the same operation to millions of pixels simultaneously. What parallel model does this represent?

SIMD.

21

Multiple CPU cores independently execute different programs or tasks simultaneously. What model does this represent?

MIMD.

22

Why is the Von Neumann bottleneck still relevant in modern computing?

The CPU can process instructions faster than memory can supply data, limiting performance.

23

Why is load balancing important in parallel programs?

If work is unevenly distributed, some processors finish early and sit idle while others continue working.

24

A parallel system shows speedup of 7 when using 8 processors. What does this indicate about efficiency?

Efficiency = 7/8 = 87.5%, which is relatively high.

25

If communication time becomes larger than computation time in a parallel program, what happens to scalability?

Scalability decreases because communication overhead dominates.

26

Why might distributed-memory systems require explicit communication (MPI)?

Because each processor has private memory and cannot directly access another processor's memory.

27

Why do threads in shared-memory systems communicate faster than MPI processes?

Threads share the same memory space and do not require message passing.

28

A program frequently locks and unlocks a mutex around a shared variable. What performance issue might occur?

Contention and reduced parallel performance due to excessive locking.

29

Why are embarrassingly parallel problems ideal for parallel computing?

They require little or no communication between processors.

30

A program divides work among processors but one processor receives much more work than others. What problem is this?

Load imbalance.

31

A reduction operation is performed repeatedly in a parallel program. What optimization could improve performance?

Tree-based reduction to reduce communication time.
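A sequential sketch of the idea (the function `tree_reduce` is illustrative, not a real MPI call): values are combined pairwise in rounds, so p values are reduced in about log2(p) communication rounds instead of p − 1 sequential steps at one root.

```python
# Tree-based (pairwise) reduction sketch: combine values in rounds,
# halving the number of partial results each round.
def tree_reduce(values, op):
    vals = list(values)
    rounds = 0
    while len(vals) > 1:
        vals = [op(vals[i], vals[i + 1]) if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
        rounds += 1
    return vals[0], rounds

total, rounds = tree_reduce(range(8), lambda a, b: a + b)
print(total, rounds)  # 28 3 -> sum of 0..7, in log2(8) = 3 rounds
```

In a real MPI program this pattern is what collectives like MPI_Reduce typically implement internally, which is why using the collective usually beats hand-written point-to-point sums.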
