Concurrent Systems and Parallelism

36 Terms

1
New cards

Sequential Processing

All of the algorithms we’ve seen so far are sequential:

  1. They have one “thread” of execution.

  2. One step follows another in sequence.

  3. One processor is all that is needed to run the algorithm.

2
New cards

Non-sequential Processing (Burglar Alarm System)

• Consider a house with a burglar alarm system.

• The system continually monitors:

  1. The front door.

  2. The back door.

  3. The sliding glass door.

  4. The door to the deck.

  5. The kitchen windows.

  6. The living room windows.

  7. The bedroom windows.

• The burglar alarm is watching all of these “at once” (at the same time).

3
New cards

Non-sequential Processing (Car Digital Dashboard)

• Your car has an onboard digital dashboard that simultaneously:

  1. Calculates how fast you’re going and displays it on the speedometer.

  2. Checks your oil level.

  3. Checks your fuel level and calculates consumption.

  4. Monitors the heat of the engine and turns on a light if it is too hot.

  5. Monitors your alternator to make sure it is charging your battery.

4
New cards

Concurrent Systems

A system in which:

  1. Multiple tasks can be executed at the same time.

  2. The tasks may be duplicates of each other, or distinct tasks.

  3. The overall time to perform the series of tasks is reduced.

5
New cards
  1. Reduce Duplication of Code.

  2. Algorithm Runtime Reduced.

  3. Real-world Problems can be Solved.

  4. Redundancy makes Systems Reliable.

4 Advantages of Concurrency

6
New cards
  1. Runtime is Not Always Reduced.

  2. More Complex.

  3. Corrupted Data.

  4. Communication Needed.

4 Disadvantages of Concurrency

7
New cards

Multiprocessor Machines

Many computers today have more than one processor (______________).

8
New cards

One Processor

Concurrency can also be achieved on a computer with only _________.

  1. The computer “juggles” jobs, swapping its attention to each in turn.

  2. “Time Slicing” allows many users to get CPU resources.

  3. Tasks may be suspended while they wait for something, such as device I/O.

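As a rough illustration of this juggling (a minimal sketch in Python, not from the original cards; the task names and delays are made up), two tasks share one processor, and each is suspended while it waits on simulated I/O, so their steps interleave:

    import asyncio

    async def task(name, n):
        for i in range(n):
            print(f"{name}: step {i}")
            # Waiting on (simulated) device I/O suspends this task,
            # letting the single processor switch to the other one.
            await asyncio.sleep(0.1)

    async def main():
        # Both tasks run "at once" on one processor by interleaving.
        await asyncio.gather(task("A", 3), task("B", 3))

    asyncio.run(main())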
9
New cards

Time Slicing

This allows many users to get CPU resources.

10
New cards

Concurrency

This is the execution of multiple tasks at the same time, regardless of the number of processors.

11
New cards
  1. Multi Programming

  2. Multi Processing

  3. Multi Tasking

  4. Distributed Systems

4 Types of Concurrent Systems

12
New cards

Multi Programming

  1. Share a single CPU among many users or tasks.

  2. May have a time-shared algorithm or a priority algorithm for determining which task to run next.

  3. Give the illusion of simultaneous processing through rapid swapping of tasks (interleaving).

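One way to picture a time-shared scheduler (a hedged sketch using Python generators; the job names and slice granularity are invented for illustration): each task runs for one "time slice", then the scheduler swaps rapidly to the next task in a round-robin queue:

    from collections import deque

    def job(name, steps):
        for i in range(steps):
            print(f"{name} runs step {i}")
            yield  # end of this job's time slice; give up the CPU

    # Time-shared (round-robin) choice of which task runs next.
    ready = deque([job("payroll", 2), job("editor", 3), job("backup", 2)])
    while ready:
        current = ready.popleft()
        try:
            next(current)          # run one time slice
            ready.append(current)  # not finished: back of the queue
        except StopIteration:
            pass                   # job finished; drop it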
13
New cards

Interleaving

Multi Programming gives the illusion of simultaneous processing through rapid swapping of tasks (_________).

14
New cards

Multi Processing

  1. Executes multiple tasks at the same time.

  2. Uses multiple processors to accomplish the tasks.

  3. Each processor may also timeshare among several tasks.

  4. Has a shared memory that is used by all the tasks.

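A minimal sketch of this using Python's multiprocessing module (illustrative only; the counter example is made up): several processes run at the same time, possibly on separate processors, and update a value held in shared memory:

    from multiprocessing import Process, Value

    def count(shared_total, n):
        # Each process may run on its own processor and updates
        # memory shared by all the tasks.
        with shared_total.get_lock():
            shared_total.value += n

    if __name__ == "__main__":
        total = Value("i", 0)          # shared memory used by all tasks
        workers = [Process(target=count, args=(total, 10)) for _ in range(4)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        print(total.value)             # 40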
15
New cards

Multi Tasking

  1. A single user can have multiple tasks running at the same time.

  2. Can be done with one or more processors.

  3. It used to be rare and found only on expensive multiprocessing systems, but now most modern operating systems can do it.

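A small illustrative sketch (not from the cards; the download function is a placeholder): one program kicks off several tasks at once with a thread pool, which may be backed by one or more processors:

    from concurrent.futures import ThreadPoolExecutor

    def download(url):
        # Placeholder work; a real task might fetch the URL.
        return f"fetched {url}"

    # One user (one program) runs several tasks at the same time.
    with ThreadPoolExecutor(max_workers=3) as pool:
        results = list(pool.map(download, ["a", "b", "c"]))
    print(results)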
16
New cards

Distributed Systems

Multiple computers working together with no central program “in charge.”

17
New cards
  1. No bottlenecks from sharing processors.

  2. No central point of failure.

  3. Processing can be localized for efficiency.

Distributed Systems (3 Advantages)

18
New cards
  1. Complexity.

  2. Communication Overhead.

  3. Distributed Control.

Distributed Systems (3 Disadvantages)

19
New cards

Parallelism

• Using multiple processors to solve a single task.

• Involves:

  1. Breaking the task into meaningful pieces.

  2. Doing the work on many processors.

  3. Coordinating and putting the pieces back together.

This is the use of multiple processors working on the same task.

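A hedged sketch of the break-work-combine pattern using Python's multiprocessing.Pool (the summing task and chunk count are made up for illustration):

    from multiprocessing import Pool

    def piece_sum(chunk):
        return sum(chunk)              # work done on one processor

    if __name__ == "__main__":
        data = list(range(1_000_000))
        # 1. Break the task into meaningful pieces.
        chunks = [data[i::4] for i in range(4)]
        # 2. Do the work on many processors.
        with Pool(processes=4) as pool:
            partials = pool.map(piece_sum, chunks)
        # 3. Coordinate and put the pieces back together.
        print(sum(partials))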
20
New cards

Pipeline Processing

  1. Repeating a sequence of operations or pieces of a task.

  2. Allocating each piece to a separate processor and chaining them together produces a pipeline, completing tasks faster.

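A minimal sketch of a two-stage pipeline in Python (illustrative; the stages and values are made up): each stage is a separate worker chained to the next by a queue, so new items enter the pipeline while earlier items are still being finished downstream:

    from queue import Queue
    from threading import Thread

    def stage(op, inbox, outbox):
        # Each piece of the task runs on its own worker, chained to the next.
        while True:
            item = inbox.get()
            if item is None:           # sentinel: shut the stage down
                outbox.put(None)
                return
            outbox.put(op(item))

    q1, q2, q3 = Queue(), Queue(), Queue()
    Thread(target=stage, args=(lambda x: x * 2, q1, q2)).start()
    Thread(target=stage, args=(lambda x: x + 1, q2, q3)).start()

    for x in [1, 2, 3]:
        q1.put(x)                      # later items enter while earlier ones
    q1.put(None)                       # are still moving down the pipeline

    while (result := q3.get()) is not None:
        print(result)                  # 3, 5, 7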
21
New cards

Pipeline Processing (Washer or Dryer Example)

  1. Suppose you have a choice between a separate washer and dryer, each with a 30-minute cycle, or;

  2. A combined washer/dryer with a one-hour cycle.

  3. Which is better depends on how much work you have to do.
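To make the trade-off concrete (a worked illustration, not part of the original card): with L loads, the separate 30-minute washer and dryer form a two-stage pipeline and finish in roughly 30 × (L + 1) minutes, while the combined one-hour machine needs 60 × L minutes. For one load both take about an hour; for three loads the pipeline takes about 2 hours versus 3 hours for the combo, so the pipeline wins once there is more than one load.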

22
New cards
  1. Automobile manufacturing.

  2. Instruction processing within a computer.

2 More Examples of Pipelined Tasks

23
New cards

Task Queues

  1. A supervisor processor maintains a queue of tasks to be performed in shared memory.

  2. Each processor queries the queue, dequeues the next task, and performs it.

  3. Task execution may involve adding more tasks to the task queue.

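A hedged sketch of a task queue in Python (illustrative; the "countdown" tasks are made up): the supervisor seeds a shared queue, workers repeatedly dequeue and perform tasks, and performing a task may enqueue more tasks:

    from multiprocessing import JoinableQueue, Process

    def worker(tasks):
        while True:
            n = tasks.get()
            if n is None:              # sentinel from the supervisor
                tasks.task_done()
                return
            print(f"processed {n}")
            if n > 1:
                tasks.put(n - 1)       # executing a task may add more tasks
            tasks.task_done()

    if __name__ == "__main__":
        tasks = JoinableQueue()        # queue of tasks in shared memory
        for n in [3, 2]:               # the supervisor seeds the queue
            tasks.put(n)
        procs = [Process(target=worker, args=(tasks,)) for _ in range(2)]
        for p in procs:
            p.start()
        tasks.join()                   # wait until every task is done
        for _ in procs:
            tasks.put(None)            # then tell the workers to stop
        for p in procs:
            p.join()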
24
New cards

Parallel Algorithms

This is a computational approach in which multiple solution points, or populations of solutions, are worked on at each iteration of the algorithm. By employing parallel processors, these algorithms can converge to the final solution much faster.

25
New cards

Parallel Bubblesort

We can use N/2 processors to do all of the pair-wise comparisons at once, “flip-flopping” which pairs are compared on each pass.

It finishes in O(N) time.

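A sketch of the idea, simulated sequentially here rather than on N/2 real processors (an assumption for illustration): each phase does all of the pair-wise comparisons for alternating even/odd pairs, and the list is sorted after N phases:

    def parallel_style_bubblesort(a):
        n = len(a)
        for phase in range(n):                 # N time chunks
            start = phase % 2                  # alternate even/odd pairs
            # Conceptually, N/2 processors do these comparisons at once;
            # here they are simulated one after another.
            for i in range(start, n - 1, 2):
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
        return a

    print(parallel_style_bubblesort([5, 1, 4, 2, 3]))   # [1, 2, 3, 4, 5]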
26
New cards

Sequential Bubblesort

It finishes in O(N^2) time.
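For contrast, a standard sequential bubblesort (a minimal sketch; one thread does all of the O(N^2) comparisons one after another):

    def bubblesort(a):
        n = len(a)
        for i in range(n - 1):            # O(N) passes...
            for j in range(n - 1 - i):    # ...each doing O(N) comparisons
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    print(bubblesort([5, 1, 4, 2, 3]))    # [1, 2, 3, 4, 5]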

27
New cards

Product Complexity

This is the amount of work per “time chunk” multiplied by the number of “time chunks” – the total work done.

  1. The parallel bubblesort got done in O(N) time, better than O(N^2).

  2. Each time “chunk” does O(N) work.

  3. There are N time chunks.

  4. Thus, the amount of work is still O(N^2).

28
New cards

Parallelization

This can reduce time, but it cannot reduce work. The product complexity cannot change or improve.


Parallelization is not free.

29
New cards

– Given an O(N log N) algorithm and log N processors, the algorithm will take at least O(N) time.

– Given an O(N^3) algorithm and N processors, the algorithm will take at least O(N^2) time.

How much improvement can parallelization provide?
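A rough rule of thumb behind these figures: with P processors, the best case is roughly parallel time ≈ sequential time / P, so O(N log N) / log N gives O(N), and O(N^3) / N gives O(N^2); the total work (product complexity) is unchanged.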

30
New cards

Processors

They are limited by hardware.

31
New cards

A Power of 2

Typically, the number of processors is ____________.

32
New cards

Not Help

Adding Processors:

• A program on one processor.

– Runs in X time.

• Adding another processor.

– Runs in no more than X/2 time.

– Realistically, it will run in X/2 + ε time because of overhead.

• At some point, adding processors will _____ and could degrade performance.

33
New cards

Controlled and Coordinated

Processors must be ______ and ________.

34
New cards

Extra Work

We need a way to govern which processor does what work; this involves ________.

35
New cards

Special Programming Language

Often the program must be written in a ____________ for parallel systems.

36
New cards

Tasks

Relatively isolated units of computation.

• Should be roughly equal in duration.

• Duration of the unit of work must be much greater than overhead time.

• Policy decisions and coordination required for shared data.

Simpler algorithms are the easiest to parallelize.