Operating Systems Lecture on Parallel and Distributed Systems

Description and Tags

These flashcards cover key concepts, definitions, and metrics related to Parallel and Distributed Systems discussed in the lecture.

12 Terms

1

What are the common performance metrics for parallel systems?

Execution time, Speedup, Efficiency, Cost, Scalability.

2

How is Speedup defined in parallel computing?

Speedup S = TS / TP, where TS is the execution time of the best sequential algorithm and TP is the parallel execution time on p processing elements.
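As a quick sketch of the formula (the timings below are made-up values for illustration, not from the lecture):

```python
# Hypothetical timings, in seconds (assumed values).
T_S = 80.0   # best sequential execution time
T_P = 25.0   # parallel execution time on p = 4 processing elements

speedup = T_S / T_P   # S = TS / TP
print(speedup)        # → 3.2
```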

3

What is the formula to calculate Communication Cost in parallel systems?

Tcomm = α + β·size, where α is the per-message startup cost and β is the per-unit (e.g., per-byte) transfer cost.
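A minimal sketch of the linear communication-cost model; the parameter values are assumptions chosen so the arithmetic is easy to check, not measured numbers:

```python
# Assumed parameters, in microseconds.
alpha = 10.0   # per-message startup cost (latency)
beta = 0.01    # per-byte transfer cost
size = 1000    # message size in bytes

T_comm = alpha + beta * size   # Tcomm = α + β·size
print(T_comm)                  # → 20.0 (microseconds)
```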

4

What does Efficiency measure in parallel systems?

Efficiency is the fraction of time for which a processing element is usefully employed; E = S / p = TS / (p·TP).
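The definition E = S / p can be sketched directly (the example timings are assumed, matching the hypothetical 4-processor run used elsewhere in these notes):

```python
def efficiency(t_s, t_p, p):
    """Efficiency E = S / p, where S = t_s / t_p."""
    return (t_s / t_p) / p

# Assumed timings: TS = 80 s sequential, TP = 25 s on p = 4.
print(efficiency(80.0, 25.0, 4))   # → 0.8
```

An efficiency of 0.8 means each processing element does useful work 80% of the time; the rest is lost to overheads such as communication and idling.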

5

What is Scalability in the context of parallel systems?

Scalability is the capacity to increase speedup in proportion to the number of processing elements.

6

What does the term 'Execution Time' refer to?

The elapsed time from the start of the computation until the last processing element finishes.

7

What two dimensions do parallel programs need to accommodate for Scalability?

Problem size and Number of processing elements.

8

What is the meaning of 'Super-linear Speedup'?

When Speedup > p, indicating that parallelization exploits extra hardware advantages (e.g., the aggregate cache of p processing elements holding more of the working set).

9

What is the significance of the variable α in Communication Cost?

α represents the per-message cost (startup latency), independent of message size.

10

What is the outcome of the SUMMA algorithm?

The SUMMA algorithm efficiently computes the matrix product C = A·B on q² processors arranged in a q×q grid.
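A serial sketch of the idea SUMMA parallelizes: C = A·B is accumulated as a sum of outer products, one per step k. In the actual algorithm, each step broadcasts a column block of A along processor rows and a row block of B along processor columns of the q×q grid; the code below only illustrates the accumulation, and the small matrices are assumed example inputs:

```python
def summa_sketch(A, B):
    """Compute C = A·B as a sum of n rank-1 (outer product) updates."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for k in range(n):              # one broadcast step per k in real SUMMA
        for i in range(n):
            for j in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(summa_sketch(A, B))   # → [[19.0, 22.0], [43.0, 50.0]]
```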

11

What is the total parallel overhead formula?

Total parallel overhead TO = p·TP − TS.
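With the same assumed timings as in the earlier sketches (TS = 80 s, TP = 25 s, p = 4), the overhead formula gives:

```python
# Assumed example timings, in seconds.
p = 4
T_P = 25.0
T_S = 80.0

T_O = p * T_P - T_S   # TO = p·TP − TS
print(T_O)            # → 20.0 (total time spent beyond useful sequential work)
```

The overhead (20 s here) is the aggregate processor time not spent on work the sequential algorithm would do, i.e., communication, idling, and redundant computation.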

12

What will the next topic be after parallel and distributed computing?

Memory Management.