Sequential Processing
All of the algorithms we’ve seen so far are sequential:
They have one “thread” of execution.
One step follows another in sequence.
One processor is all that is needed to run the algorithm.
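A minimal sketch of sequential execution in Python (the step functions are illustrative assumptions): one thread runs the steps, and each step begins only after the previous one finishes.

```python
# One thread of execution: each step runs only after the
# previous one finishes, on a single processor.
def step1():
    return "data gathered"

def step2(x):
    return x + ", processed"

def step3(x):
    print(x + ", saved")

step3(step2(step1()))  # strictly one step after another
```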
Non-sequential Processing (Burglar Alarm System)
• Consider a house with a burglar alarm system.
• The system continually monitors:
The front door.
The back door.
The sliding glass door.
The door to the deck.
The kitchen windows.
The living room windows.
The bedroom windows.
• The burglar alarm is watching all of these “at once” (at the same time).
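A hedged sketch of this kind of monitoring in Python, assuming one thread per sensor and a stand-in monitor function (a real alarm system would poll actual hardware):

```python
import threading
import time

SENSORS = ["front door", "back door", "sliding glass door",
           "deck door", "kitchen windows", "living room windows",
           "bedroom windows"]

def monitor(sensor):
    # Illustrative stand-in for polling real alarm hardware.
    for _ in range(3):
        time.sleep(0.1)
        print(f"{sensor}: OK")

# One thread per sensor, so every sensor is watched "at once".
threads = [threading.Thread(target=monitor, args=(s,)) for s in SENSORS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```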
Non-sequential Processing (Car Digital Dashboard)
• Your car has an onboard digital dashboard that simultaneously:
Calculates how fast you’re going and displays it on the speedometer.
Checks your oil level.
Checks your fuel level and calculates consumption.
Monitors the heat of the engine and turns on a light if it is too hot.
Monitors your alternator to make sure it is charging your battery.
Concurrent Systems
A system in which:
Multiple tasks can be executed at the same time.
The tasks may be duplicates of each other, or distinct tasks.
The overall time to perform the series of tasks is reduced.
4 Advantages of Concurrency
Reduces duplication of code.
Algorithm runtime is reduced.
Real-world problems can be solved.
Redundancy makes systems reliable.
4 Disadvantages of Concurrency
Runtime is not always reduced.
Programs are more complex.
Shared data can be corrupted.
Communication between tasks is needed.
Multiprocessor Machines
Many computers today have more than one processor (______________).
One Processor
Concurrency can also be achieved on a computer with only _________.
The computer “juggles” jobs, swapping its attention to each in turn.
“Time Slicing” allows many users to get CPU resources.
Tasks may be suspended while they wait for something, such as device I/O.
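A minimal simulation of this juggling, assuming each job is modeled as a Python generator that gives up the CPU at the end of each time slice (a sketch of the idea, not how a real OS scheduler is built):

```python
from collections import deque

def job(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # give up the CPU at the end of this time slice

# Round-robin scheduler: swap attention to each job in turn.
ready = deque([job("A", 3), job("B", 2), job("C", 3)])
while ready:
    current = ready.popleft()
    try:
        next(current)          # run one time slice
        ready.append(current)  # not finished: back of the line
    except StopIteration:
        pass                   # job finished, drop it
```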
Time Slicing
This allows many users to get CPU resources.
Concurrency
This is the execution of multiple tasks at the same time, regardless of the number of processors.
4 Types of Concurrent Systems
Multiprogramming
Multiprocessing
Multitasking
Distributed Systems
Multiprogramming
Share a single CPU among many users or tasks.
May have a time-shared algorithm or a priority algorithm for determining which task to run next.
Give the illusion of simultaneous processing through rapid swapping of tasks (interleaving).
Interleaving
Multiprogramming gives the illusion of simultaneous processing through rapid swapping of tasks (_________).
Multiprocessing
Executes multiple tasks at the same time.
Uses multiple processors to accomplish the tasks.
Each processor may also timeshare among several tasks.
Has a shared memory that is used by all the tasks.
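A small sketch of these ideas with Python's standard multiprocessing module; the counting task and the process count are assumptions for illustration. The Value object lives in memory shared by all the tasks, and its lock guards against corrupted data:

```python
from multiprocessing import Process, Value

def work(counter):
    # Each process updates memory shared by all the tasks.
    for _ in range(1000):
        with counter.get_lock():  # avoid corrupted data
            counter.value += 1

if __name__ == "__main__":
    total = Value("i", 0)  # shared integer in shared memory
    procs = [Process(target=work, args=(total,)) for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(total.value)  # 4000
```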
Multitasking
A single user can have multiple tasks running at the same time.
Can be done with one or more processors.
Once rare and available only on expensive multiprocessing systems, but now most modern operating systems support it.
Distributed Systems
Multiple computers working together with no central program “in charge.”
Distributed Systems (3 Advantages)
No bottlenecks from sharing processors.
No central point of failure.
Processing can be localized for efficiency.
Distributed Systems (3 Disadvantages)
Complexity.
Communication overhead.
Distributed control.
Parallelism
• Using multiple processors to solve a single task.
• Involves:
Breaking the task into meaningful pieces.
Doing the work on many processors.
Coordinating and putting the pieces back together.
Parallelism is the use of multiple processors working simultaneously on the same task.
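A hedged sketch of those three steps in Python using a process pool; the summing task and the number of pieces are illustrative assumptions:

```python
from multiprocessing import Pool

def piece_sum(chunk):
    return sum(chunk)  # the work done on one processor

if __name__ == "__main__":
    data = list(range(1_000_000))
    n = 4
    # 1. Break the task into meaningful pieces.
    chunks = [data[i::n] for i in range(n)]
    with Pool(processes=n) as pool:
        # 2. Do the work on many processors.
        partials = pool.map(piece_sum, chunks)
    # 3. Coordinate and put the pieces back together.
    print(sum(partials))
```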
Pipeline Processing
Some tasks repeat the same sequence of operations on many pieces of work.
Allocating each operation to a separate processor and chaining the processors together produces a pipeline, completing the stream of tasks faster.
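A minimal two-stage pipeline in Python using threads chained together by queues (a hedged sketch; the stage names anticipate the washer/dryer example below):

```python
import threading
import queue

def stage(name, inbox, outbox):
    # One processor repeatedly performs its piece of the task.
    while True:
        item = inbox.get()
        if item is None:        # sentinel: no more work
            if outbox is not None:
                outbox.put(None)
            break
        result = f"{item} -> {name}"
        if outbox is not None:
            outbox.put(result)  # chain to the next stage
        else:
            print(result)

q1, q2 = queue.Queue(), queue.Queue()
washer = threading.Thread(target=stage, args=("washed", q1, q2))
dryer = threading.Thread(target=stage, args=("dried", q2, None))
washer.start()
dryer.start()
for load in ["towels", "shirts", "socks"]:
    q1.put(load)  # while one load dries, the next one washes
q1.put(None)
washer.join()
dryer.join()
```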
Pipeline Processing (Washer or Dryer Example)
Suppose you have a choice between a separate washer and dryer, each with a 30-minute cycle, or
a combined washer/dryer unit with a one-hour cycle.
The correct answer depends on how much work you have to do: a single load takes an hour either way, but with many loads the separate washer and dryer form a two-stage pipeline that finishes a new load every 30 minutes after the first.
2 More Examples of Pipelined Tasks
Automobile manufacturing.
Instruction processing within a computer.
Task Queues
A supervisor processor maintains a queue of tasks to be performed in shared memory.
Each processor queries the queue, dequeues the next task, and performs it.
Task execution may involve adding more tasks to the task queue.
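A sketch of such a task queue in Python; the “task” here just splits a number in half, an assumption chosen so that performing a task can add more tasks to the queue:

```python
import queue
import threading

tasks = queue.Queue()          # queue of tasks in shared memory

def worker():
    while True:
        n = tasks.get()        # dequeue the next task
        if n > 1:
            # Performing a task may add more tasks to the queue.
            tasks.put(n // 2)
            tasks.put(n - n // 2)
        else:
            print("leaf task done")
        tasks.task_done()

for _ in range(4):             # each processor queries the queue
    threading.Thread(target=worker, daemon=True).start()

tasks.put(8)                   # the supervisor seeds the queue
tasks.join()                   # wait until every task is performed
```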
Parallel Algorithms
A computational approach in which multiple solution points, or whole populations of solutions, are evaluated at each iteration of the algorithm. By employing parallel processors, these algorithms can significantly speed up convergence to the final solution.
Parallel Bubblesort
We can use N/2 processors to do all the pair-wise comparisons at once, alternating (“flip-flopping”) between even- and odd-indexed pairs; see the sketch below.
Finishes in O(N) time.
Sequential Bubblesort
Finishes in O(N^2) time.
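A sequential Python simulation of the parallel version (odd-even transposition sort): each of the N time steps performs all of the pair-wise comparisons that N/2 processors would do at once.

```python
def parallel_bubblesort(a):
    n = len(a)
    for step in range(n):              # N time chunks
        start = step % 2               # alternate even/odd pairs
        # In hardware, N/2 processors compare these pairs at once.
        for i in range(start, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(parallel_bubblesort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]
```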
Product Complexity
This is the amount of work per “time chunk” multiplied by the number of “time chunks” – the total work done.
Parallel bubblesort finished in O(N) time, better than O(N^2).
Each time “chunk” does O(N) work.
There are N time chunks.
Thus, the total amount of work is still O(N^2).
Parallelization
This can reduce time, but it cannot reduce work. The product complexity cannot change or improve.
Parallelization is not free.
– Given an O(N log N) algorithm and log N processors, the algorithm will take at least O(N log N / log N) = O(N) time.
– Given an O(N^3) algorithm and N processors, the algorithm will take at least O(N^3 / N) = O(N^2) time.
How much improvement can parallelization provide?
Processors
They are limited by hardware.
A Power of 2
Typically, the number of processors is?
Not Help
Adding Processors:
• A program on one processor.
– Runs in X time
• Adding another processor.
– Runs in no more than X/2 time.
– Realistically, it will run in slightly more than X/2 time because of overhead.
• At some point, adding processors will _____ and could degrade performance.
Controlled and Coordinated
Processors must be ______ and ________.
Extra Work
We need a way to govern which processor does what work; this involves ________.
Special Programming Language
Often the program must be written in a ____________ for parallel systems.
Tasks
• Relatively isolated units of computation.
• Should be roughly equal in duration.
• Duration of the unit of work must be much greater than overhead time.
• Policy decisions and coordination required for shared data.
• Simpler algorithms are the easiest to parallelize.