Parallel Computing

Introduction

  • Definition: Parallel computing involves the simultaneous execution of multiple tasks to improve computational speed and performance.

  • Benefits include increased processing power and efficiency in handling large datasets.

Steps in Creating a Parallel Program

Key Phases
  • Partitioning: Dividing the task among various processors.

  • Decomposition: the sequential task is broken into processes (subtasks) that can run in parallel.

  • Communication: Handling data exchange between subtasks to maintain consistency.

  • Execution: Each processor executes its assigned task concurrently.

  • Aggregation: Combining results from all processors to yield a final output.
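
These phases can be sketched in C with POSIX threads (one of the tools named later in these slides). This is a minimal sketch, assuming an illustrative task (summing an array) and illustrative sizes for the data and worker pool: the array is partitioned into equal chunks, each thread executes on its own chunk concurrently, and the main thread aggregates the partial results.

  #include <pthread.h>
  #include <stdio.h>

  #define N 1000000          /* total elements (assumed size)        */
  #define NUM_WORKERS 4      /* number of parallel workers (assumed) */

  static double data[N];
  static double partial[NUM_WORKERS];

  /* Execution phase: each worker sums its own partition of the array. */
  static void *worker(void *arg) {
      long id = (long)arg;
      long chunk = N / NUM_WORKERS;
      long start = id * chunk;
      long end = (id == NUM_WORKERS - 1) ? N : start + chunk;
      double sum = 0.0;
      for (long i = start; i < end; i++)
          sum += data[i];
      partial[id] = sum;   /* each worker writes only its own slot */
      return NULL;
  }

  int main(void) {
      for (long i = 0; i < N; i++)
          data[i] = 1.0;   /* known values so the result is checkable */

      pthread_t tid[NUM_WORKERS];

      /* Partitioning: one thread per equal-sized chunk of the data. */
      for (long id = 0; id < NUM_WORKERS; id++)
          pthread_create(&tid[id], NULL, worker, (void *)id);

      /* Aggregation: wait for all workers, then combine partial sums. */
      double total = 0.0;
      for (long id = 0; id < NUM_WORKERS; id++) {
          pthread_join(tid[id], NULL);
          total += partial[id];
      }
      printf("total = %f\n", total);   /* expected: 1000000.0 */
      return 0;
  }

Because each worker writes only its own slot of partial, no locking is needed during execution; communication is confined to the final aggregation step.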

Cost versus Performance

  • Cost vs. Performance Curve: Shows how improvements in computing power have historically led to reduced costs per unit of performance.

Evolution Over Decades
  • The curve highlights advancements from the 1960s to the 2000s, indicating a significant increase in cost-efficiency of computing resources over time.

What is Parallel Computing?

  • Analogy: Like a library task handled by multiple workers, with the books to be shelved distributed among them:

  1. Task Partitioning: Assigning sets of books to different workers.

  2. Communication: Workers passing books among themselves as needed.

The Scope of Parallel Computing

  • Application Areas: Parallel computing supports complex algorithms, including:

    • Search Engines

    • AI Algorithms

    • Image Processing

    • Weather Prediction: Processing the large volumes of data collected by satellites and sensors.

Example: Weather Modeling & Forecasting

  1. Modeling a large area and volume (3000 x 3000 miles, 11 miles high).

  2. Domain partitioning into billions of segments (approx. 10^11 segments) for computation.

  3. With roughly 100 updates needed for a two-day forecast, a serial supercomputer would take about 280 hours to finish, longer than the two days being forecast, showing how impractical the problem is without parallel processing.

Efficient Solution
  • Dividing the computational load across 1000 parallel processors, each computing different segments simultaneously, cuts the prediction time from about 280 hours to well under 3 hours.
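
A quick check of the arithmetic, under the idealized assumption of perfect linear speedup with no communication overhead:

  T_parallel = T_serial / p = 280 hours / 1000 processors = 0.28 hours ≈ 17 minutes

Real machines fall short of this ideal because processors must exchange boundary data between neighboring segments, which is why the more conservative figure of under 3 hours is quoted above.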

Issues in Parallel Computing

  1. Design of Parallel Computers: Focusing on scalability and fast communication.

  2. Algorithms: Designing effective parallel algorithms is significantly more complex than designing sequential ones.

  3. Evaluation of Performance: Analyzing metrics such as speed and efficiency for parallel algorithms.

  4. Programming Languages & Tools: Support such as Pthreads, MPI, and High-Performance Fortran (HPF) is essential for developing programs in parallel environments.

  5. Portable Parallel Programs: Ensuring these applications can run on different architectures without significant modifications.
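
As a concrete taste of the tools in point 4, a minimal MPI program in C (standard MPI calls; the build and launch commands assume a typical installation): each process learns its rank and the total process count, the basic ingredients of any message-passing program.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);               /* start the MPI runtime      */

      int rank, size;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's id          */
      MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes  */

      printf("process %d of %d\n", rank, size);

      MPI_Finalize();                       /* shut the runtime down      */
      return 0;
  }

Compile with mpicc and launch with, e.g., mpirun -np 4 to see four processes report in; the same binary runs unchanged at other process counts, which is part of what makes such programs portable.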

Models of Parallel Computers

Taxonomy of Parallel Architectures

  • Differentiation based on:

    • Control Mechanism (Global vs. Independent)

    • Address-Space Organization (Distributed vs. Shared Memory)

    • Types of Interconnection Networks

Control Mechanisms

  1. SIMD (Single Instruction, Multiple Data): Same instruction is executed across multiple processors.

  2. MIMD (Multiple Instruction, Multiple Data): Each processor can execute different instructions independently.
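
The distinction can be illustrated in C (a conceptual sketch, not tied to any particular machine): the first fragment is SIMD-flavored, one instruction stream applied to many data elements, while the second is MIMD-flavored, two threads executing entirely different instruction streams.

  #include <pthread.h>
  #include <stdio.h>

  /* SIMD flavor: the same operation is applied to every data element;
   * conceptually, all "lanes" execute this one instruction in lockstep. */
  void simd_style(float *a, int n) {
      for (int i = 0; i < n; i++)
          a[i] = a[i] * 2.0f;   /* compilers can vectorize such loops */
  }

  /* MIMD flavor: independent processors run different instruction streams. */
  void *count_words(void *arg)  { puts("counting words");  return NULL; }
  void *sort_records(void *arg) { puts("sorting records"); return NULL; }

  int main(void) {
      float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
      simd_style(a, 8);

      pthread_t t1, t2;
      pthread_create(&t1, NULL, count_words, NULL);   /* different programs...   */
      pthread_create(&t2, NULL, sort_records, NULL);  /* ...running concurrently */
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      return 0;
  }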

Address-Space Organization

  • Two models:

    1. Message-Passing Architecture: Each processor has its own private memory, and processors communicate by explicitly exchanging messages.

    2. Shared-Address Space Architecture: Unified memory accessible by all processors.

  • Performance evaluation of these architectures considers factors like efficiency and fault tolerance.
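
The message-passing model can be made concrete with a small C/MPI sketch (assuming two processes; the payload value is illustrative): process 0's memory is invisible to process 1, so data moves only through explicit send and receive calls. In a shared-address-space machine the same exchange would be an ordinary read of a common variable, guarded by synchronization.

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      int value;                             /* private to each process */
      if (rank == 0) {
          value = 42;
          /* Memory is not shared, so the data must be sent explicitly. */
          MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
      } else if (rank == 1) {
          MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                   MPI_STATUS_IGNORE);
          printf("process 1 received %d\n", value);
      }

      MPI_Finalize();
      return 0;
  }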

Interconnection Networks

  • Classified as static (fixed, point-to-point links) or dynamic (connections established by switches at run time).

  • Common structures include crossbar, bus-based, and multistage networks.
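
To make the static/dynamic distinction concrete, here is a small C sketch (the 4 x 4 mesh size is an illustrative assumption) that enumerates the fixed neighbors of each processor in a 2D mesh, a classic static network. In a dynamic network such as a crossbar or multistage switch, connectivity would instead be established by switching elements at run time.

  #include <stdio.h>

  #define ROWS 4
  #define COLS 4

  /* In a 2D mesh the links are fixed by position: each processor is wired
   * only to its north/south/east/west neighbors, and that never changes. */
  void print_neighbors(int id) {
      int r = id / COLS, c = id % COLS;
      printf("processor %2d:", id);
      if (r > 0)        printf(" north=%d", id - COLS);
      if (r < ROWS - 1) printf(" south=%d", id + COLS);
      if (c > 0)        printf(" west=%d",  id - 1);
      if (c < COLS - 1) printf(" east=%d",  id + 1);
      printf("\n");
  }

  int main(void) {
      for (int id = 0; id < ROWS * COLS; id++)
          print_neighbors(id);
      return 0;
  }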

Conclusion

  • Utility of Parallel Computing: Essential for modern computational tasks across many domains, with ongoing advances aimed at handling ever-growing data volumes and complexity efficiently.