These flashcards cover key concepts, definitions, and metrics related to Parallel and Distributed Systems discussed in the lecture.
What are the common performance metrics for parallel systems?
Execution time, Speedup, Efficiency, Cost, Scalability.
How is Speedup defined in parallel computing?
Speedup S = TS / TP, where TS is the execution time of the best sequential algorithm and TP is the parallel execution time on p processing elements.
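A minimal sketch of this ratio (the function name and the example timings are illustrative, not from the lecture):

```python
def speedup(t_serial: float, t_parallel: float) -> float:
    """Speedup S = TS / TP (TS: best serial time, TP: parallel time)."""
    return t_serial / t_parallel

# Illustrative timings: a 100 s serial run reduced to 25 s in parallel.
print(speedup(100.0, 25.0))  # 4.0
```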
What is the formula to calculate Communication Cost in parallel systems?
Tcomm = α + β·size, where α is the per-message startup cost and β is the per-word transfer cost.
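The linear cost model translates directly into code; the parameter values below are hypothetical, chosen only to illustrate the formula:

```python
def t_comm(alpha: float, beta: float, size: int) -> float:
    """Tcomm = alpha + beta * size: per-message startup cost
    plus per-word transfer cost times the message size."""
    return alpha + beta * size

# Illustrative: 10 us startup, 0.01 us per word, 1000-word message.
print(t_comm(10.0, 0.01, 1000))  # 20.0 (microseconds)
```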
What does Efficiency measure in parallel systems?
Efficiency is the fraction of time for which a processing element is usefully employed.
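Since Efficiency E = S / p = TS / (p·TP), it can be sketched like this (names and timings are illustrative, not from the lecture):

```python
def efficiency(t_serial: float, t_parallel: float, p: int) -> float:
    """E = TS / (p * TP): fraction of time each of the p
    processing elements is usefully employed."""
    return t_serial / (p * t_parallel)

# Illustrative: 100 s serial, 25 s parallel on 8 PEs -> E = 0.5,
# i.e. each PE does useful work only half the time.
print(efficiency(100.0, 25.0, 8))  # 0.5
```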
What is Scalability in the context of parallel systems?
Scalability is the capacity to increase speedup in proportion to the number of processing elements.
What does the term 'Execution Time' refer to?
The elapsed (wall-clock) time from the start of the computation until the last processing element finishes.
What two dimensions do parallel programs need to accommodate for Scalability?
Problem size and Number of processing elements.
What is the meaning of 'Super-linear Speedup'?
When Speedup > p, typically because the parallel version exploits extra hardware resources, e.g. the larger combined cache of the p processing elements.
What is the significance of the variable α in Communication Cost?
α represents the per-message (startup) cost, paid once per message regardless of its size.
What is the outcome of the SUMMA algorithm?
The SUMMA algorithm computes the matrix product C = A·B on a q × q grid of q² processors efficiently.
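A serial simulation of SUMMA's structure, assuming square n × n matrices with n divisible by q (a sketch of the broadcast/accumulate pattern, not a distributed implementation):

```python
def summa(A, B, q):
    """Serially simulate SUMMA on a q x q grid of q^2 processors.

    In step k, processor (i, j) receives block A_ik (broadcast along
    its grid row) and block B_kj (broadcast along its grid column)
    and accumulates their product into its local block of C.
    """
    n = len(A)
    b = n // q  # block size owned by each processor
    C = [[0.0] * n for _ in range(n)]
    for k in range(q):          # q broadcast/accumulate steps
        for i in range(q):      # simulate every processor (i, j)
            for j in range(q):
                # local block update: C_ij += A_ik * B_kj
                for r in range(i * b, (i + 1) * b):
                    for c in range(j * b, (j + 1) * b):
                        s = 0.0
                        for t in range(k * b, (k + 1) * b):
                            s += A[r][t] * B[t][c]
                        C[r][c] += s
    return C
```

For a 2 × 2 example with q = 2, `summa([[1, 2], [3, 4]], [[5, 6], [7, 8]], 2)` yields the ordinary product [[19, 22], [43, 50]].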
What is the total parallel overhead formula?
Total parallel overhead = pTP - TS.
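The overhead formula as a one-line sketch (the timings below are illustrative and reuse the hypothetical 100 s / 25 s / 8-PE example, not lecture data):

```python
def total_overhead(p: int, t_parallel: float, t_serial: float) -> float:
    """TO = p*TP - TS: total time spent by all p processing
    elements beyond the time needed by the serial algorithm."""
    return p * t_parallel - t_serial

# Illustrative: 8 PEs x 25 s = 200 s of total work vs 100 s serial,
# so 100 s is overhead (communication, idling, extra computation).
print(total_overhead(8, 25.0, 100.0))  # 100.0
```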
What will the next topic be after parallel and distributed computing?
Memory Management.