L2 - Concurrent Systems & Parallelism


1

Sequential Processing

– There is one “thread” of execution

– One step follows another in sequence

– One processor is all that is needed to run the algorithm

2

sequential

All of the algorithms we’ve seen so far are ___________

3

one “thread”

An algorithm has __________ of execution

4

one processor

How many processors are needed to run the algorithm?

5

Non-sequential

A burglar alarm watches all of its sensors “at once” (at the same time).

6

Concurrent Systems

Multiple tasks can be executed at the same time

7

Concurrent Systems

The tasks may be duplicates of each other, or distinct tasks

8

Concurrent Systems

The overall time to perform the series of tasks is reduced

9

true

True or false? Concurrent processes can reduce duplication in code

10

runtime

The overall _______ of the algorithm can be significantly reduced.

11

Redundancy

can make systems more reliable.

12

false

True or false? Runtime is always reduced, so careful planning is required

13

multiprocessor machines

Many computers today have more than one processor

14

Time slicing

allows many users to get CPU resources

15

true

True or false? Concurrency can also be achieved on a computer with only one processor

16

false

True or false? Tasks won’t be suspended while they wait for something, such as device I/O

17

Concurrency

is the execution of multiple tasks at the same time, regardless of the number of processors.

18

Parallelism

is the use of multiple processors working on the same task.
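
A minimal Python sketch of the distinction (the counting task and names are invented for illustration): threads give concurrency (interleaved execution, regardless of how many processors exist), while a process pool can give parallelism, with separate processors working at the same time.

```python
import threading
from multiprocessing import Pool

def count_down(label, n):
    # A stand-in CPU-bound task.
    while n > 0:
        n -= 1
    print(label, "done")

if __name__ == "__main__":
    # Concurrency: two threads interleave on whatever processors exist.
    t1 = threading.Thread(target=count_down, args=("thread-1", 500_000))
    t2 = threading.Thread(target=count_down, args=("thread-2", 500_000))
    t1.start()
    t2.start()
    t1.join()
    t2.join()

    # Parallelism: worker processes can execute the same task
    # simultaneously on separate processors.
    with Pool(processes=2) as pool:
        pool.starmap(count_down, [("proc-1", 500_000), ("proc-2", 500_000)])
```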

19

  • Multiprogramming

  • Multiprocessing

  • Multitasking

  • Distributed Systems

Types of Concurrent Systems

20

Multiprogramming

Share a single CPU among many users or tasks

21

Multiprogramming

May have a time-shared algorithm or a priority algorithm for determining which task to run next

22

Multiprogramming

Give the illusion of simultaneous processing through rapid swapping of tasks (interleaving).
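
A toy sketch of that rapid swapping (all names are invented for this illustration): a round-robin loop gives each task one step per "time slice", so a single processor appears to run both tasks at once.

```python
def task(name, steps):
    # Each yield marks the end of a time slice, where the
    # scheduler may swap this task out for another.
    for i in range(steps):
        print(f"{name}: step {i}")
        yield

def round_robin(tasks):
    # Rapidly rotate among the ready tasks (interleaving).
    ready = list(tasks)
    while ready:
        current = ready.pop(0)
        try:
            next(current)           # run one time slice
            ready.append(current)   # back of the line
        except StopIteration:
            pass                    # task finished; drop it

round_robin([task("A", 3), task("B", 3)])
# Prints A: step 0, B: step 0, A: step 1, B: step 1, ...
```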

23

Achieving Concurrency

(diagram)

24

Multiprogramming

(diagram)

25

Multiprocessing

Executes multiple tasks at the same time

26

Multiprocessing

Uses multiple processors to accomplish the tasks

27

shared memory

Multiprocessing has a ______ that is used by all the tasks
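
A hedged sketch of that shared memory in Python (the counter and worker names are invented): multiprocessing.Value allocates memory that every worker process can read and write, with a lock so concurrent updates don't collide.

```python
from multiprocessing import Process, Value

def add_many(counter, n):
    for _ in range(n):
        # The lock keeps two processes from updating the
        # shared memory at the same moment.
        with counter.get_lock():
            counter.value += 1

if __name__ == "__main__":
    shared = Value("i", 0)  # an int living in shared memory
    workers = [Process(target=add_many, args=(shared, 10_000)) for _ in range(4)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print(shared.value)  # 40000: every task used the same memory
```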

28

Multiprocessing

(diagram)

29

Multitasking

A single user can have multiple tasks running at the same time.

30

Multitasking

Can be done with one or more processors.

31

Multitasking

Used to be rare and found only on expensive multiprocessing systems, but now most modern operating systems can do it

32

Multitasking

(diagram)

33

Distributed Systems

Multiple computers working together with no central program “in charge.”

34

Distributed Systems

(diagram)

35

Advantages of Distributed Systems

  • No bottlenecks from sharing processors

  • No central point of failure

  • Processing can be localized for efficiency

36

Disadvantages of Distributed Systems

  • Complexity

  • Communication overhead

  • Distributed control

37

Parallelism

Using multiple processors to solve a single task.

38

Parallelism

– Breaking the task into meaningful pieces

– Doing the work on many processors

– Coordinating and putting the pieces back together.
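
Those three steps as a hedged Python sketch (the summing task and the chunk count are choices made for this example, not from the slides):

```python
from multiprocessing import Pool

def chunk_sum(chunk):
    # Step 2: each processor does the work on its own piece.
    return sum(chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))

    # Step 1: break the task into meaningful pieces.
    n_chunks = 4
    size = len(data) // n_chunks
    chunks = [data[i * size:(i + 1) * size] for i in range(n_chunks)]

    with Pool(processes=n_chunks) as pool:
        partials = pool.map(chunk_sum, chunks)

    # Step 3: coordinate and put the pieces back together.
    total = sum(partials)
    print(total == sum(data))  # True
```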

39

Parallelism

(diagram)

40

Pipeline Processing

Repeating a sequence of operations or pieces of a task.

41

pipeline

Allocating each piece to a separate processor and chaining them together produces a ________, completing tasks faster.
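
A minimal pipeline sketch using threads and queues (the stage functions are invented for illustration): each piece of the task runs on its own worker and hands results to the next, so several loads can occupy different stages at the same time.

```python
import threading
from queue import Queue

def stage(func, inbox, outbox):
    # One worker per stage: take an item, transform it, pass it
    # down the chain. None is a shutdown sentinel.
    while (item := inbox.get()) is not None:
        outbox.put(func(item))
    outbox.put(None)

q1, q2, q3 = Queue(), Queue(), Queue()
threading.Thread(target=stage, args=(lambda x: x + " washed", q1, q2)).start()
threading.Thread(target=stage, args=(lambda x: x + " dried", q2, q3)).start()

# Feed three "loads" in; later loads enter the first stage
# while earlier loads are already in the second.
for load in ["load1", "load2", "load3"]:
    q1.put(load)
q1.put(None)

while (done := q3.get()) is not None:
    print(done)
```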

42

One Load

(diagram)

43

Three Loads

(diagram)

44

Pipelined task

(diagram)

45

3 & 4

Which time steps are the busiest?

46

Task Queues

A supervisor processor maintains a queue of tasks to be performed in shared memory.

47

dequeues

Each processor queries the queue, _________ the next task and performs it
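
A hedged sketch of a task queue (the tasks and names are invented): the supervisor enqueues work in a shared queue, and each worker repeatedly dequeues the next task and performs it.

```python
import threading
from queue import Queue

task_queue = Queue()   # plays the role of the shared memory

def worker(worker_id):
    # Each worker queries the queue, dequeues the next task,
    # and performs it; None tells the worker to stop.
    while (task := task_queue.get()) is not None:
        print(f"worker {worker_id} computed {task()}")

# The "supervisor" fills the queue with tasks to be performed.
for n in range(10):
    task_queue.put(lambda n=n: n * n)

workers = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for w in workers:
    w.start()
for _ in workers:
    task_queue.put(None)   # one stop signal per worker
for w in workers:
    w.join()
```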

48

Parallel Bubblesort

We can use N/2 processors to do all the comparisons at once, “flopping” the pair-wise comparisons between even and odd pairings on alternate passes.

49

87 93 65 74 45 57 27 33

what’s next?

93 87 | 74 65 | 57 45 | 33 27

50

87 65 93 45 74 27 57 33

what’s next?

87 | 93 65 | 74 45 | 57 27 | 33

51

65 87 45 93 27 74 33 57

what’s next?

87 65 | 93 45 | 74 27 | 57 33
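
A sequential simulation of those steps, assuming the usual odd-even transposition formulation of parallel bubblesort: within each phase the pair-wise comparisons are independent, so N/2 processors could perform them all in one time step.

```python
def parallel_bubblesort(a):
    # Odd-even transposition sort: each phase "flops" between
    # even-indexed and odd-indexed pairings. All swaps within a
    # phase are independent, so N/2 processors could do them at once.
    n = len(a)
    for phase in range(n):
        start = phase % 2
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(parallel_bubblesort([93, 87, 74, 65, 57, 45, 33, 27]))
# The first three phases reproduce the card sequences above:
# N phases of N/2 simultaneous comparisons gives O(N) time on N/2 processors.
```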

52

N² time

Sequential bubblesort finishes in

53

N time

Parallel bubblesort finishes in

54

O(N²)

Thus, the amount of work is still

55

Product complexity

is the amount of work per “time chunk” multiplied by the number of “time chunks” – the total work done.

56

true

True or false? Parallelization can reduce time, but it cannot reduce work.

57

true

True or false? The product complexity cannot change or improve.

58

O(N) time

Given an O(N log N) algorithm and log N processors, the algorithm will take at least ________

59

O(N²) time

Given an O(N³) algorithm and N processors, the algorithm will take at least
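
The arithmetic behind the last two cards, restated (assuming the work divides evenly among the processors): time is at least total work ÷ number of processors, so O(N log N) work on log N processors needs at least O(N log N) / log N = O(N) time, and O(N³) work on N processors needs at least O(N³) / N = O(N²) time.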

60

hardware

Processors are limited by

61

power of 2

Typically, the number of processors is a

62

constant factor, 2^K

Usually: The number of processors is a

63

X time

A program on one processor runs in ______.

64

no more than X/2 time

Adding another processor, it runs in ______.

65

X/2 + ε time

Realistically, it will run in _______ because of overhead

66

not free

The overhead of parallelization is ________

67

controlled and coordinated

Processors must be _________ and ________

68

special programming language

Often the program must be written in a ___________________ for parallel systems.

69

Tasks

  • Relatively isolated units of computation

  • Should be roughly equal in duration

  • Duration of the unit of work must be much greater than overhead time

  • Policy decisions and coordination required for shared data

  • Simpler algorithms are the easiest to parallelize