Parallel Programming, Lecture 18

Introduction to Parallel Sorting

  • Continuing discussion on parallel sorting in parallel programming.

  • Review of bubble sort from the last class:

    • Analysis of the sequential version.

    • Emphasis on the two nested for loops.

Sequential Bubble Sort Analysis

  • Sequential complexity derived from first principles, without relying on full asymptotic notation.

    • Worst case involves swaps on every iteration of the inner loop.

    • Complexity noted as N choose 2 (the binomial coefficient, N(N-1)/2), which is the number of pairwise comparisons between elements in the worst case.

    • Importance of understanding the comparisons when analyzing sorting algorithms.

Transition to Parallel Bubble Sort

  • Focus on modifications to the bubble sort algorithm for parallel execution.

  • Defined the phases of bubble sort:

    • Each phase represents iterations of the inner loop for a fixed outer loop index (i).

    • For the first phase, i starts at n-1 and j runs from 0 to n-2.

    • Demonstration through an example with numbers (e.g., 2, 4, 6, 8).
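The phase structure described above can be sketched as a plain sequential C function (the function name and example input are illustrative, not from the lecture): for a fixed outer index i, the inner sweep is one phase, and the first phase runs j from 0 to n-2.

```c
/* Sequential bubble sort arranged so that each value of i is one "phase":
   for a fixed i, the inner loop sweeps j = 0 .. i-1 and bubbles the largest
   remaining element into position i. */
void bubble_sort(int a[], int n) {
    for (int i = n - 1; i > 0; i--) {       /* one phase per value of i */
        for (int j = 0; j < i; j++) {       /* first phase: j = 0 .. n-2 */
            if (a[j] > a[j + 1]) {          /* compare neighbors */
                int tmp = a[j];             /* swap if out of order */
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
}
```

Running it on the reverse of the lecture's example, {8, 6, 4, 2}, produces 2 4 6 8 after three phases.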

Parallelization Limitations

  • Parallel execution of elements requires caution:

    • Exchanging two overlapping pairs at the same time can reorder operations and lead to an incorrectly sorted result.

    • Within a phase of plain bubble sort, adjacent comparisons share an element, so they must be performed in sequence to ensure correctness.

Phases of Parallel Bubble Sort

  • Each comparison within a phase modifies at most the two neighboring entries it examines.

  • Two key observations:

    • Once a position is finalized by a comparison in a phase, the later comparisons of that same phase never modify it again.

    • A new phase can therefore start once the current phase has moved forward past the first few comparisons, pipelining the phases.

Implementation and Programming Considerations

  • Programming a phase for parallel processing can be straightforward:

    • Assign jobs based on the rank of the core (processor).

    • Each core must wait for its turn, so that comparisons sharing an element never run at the same time.

  • Use of MPI (Message Passing Interface) for communication between cores:

    • One core informs the next when it has reached certain stages (checkpoints).

    • Utilizing send and receive messages to control the timing of execution across cores prevents race conditions.
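The checkpoint handoff between neighboring cores can be sketched as follows. This is an untested illustration, assuming a standard MPI program; the function name, the tag constant, and the token variable are ours, not from the lecture.

```c
/* Sketch: core `rank` blocks until the core to its left signals that it has
   passed its checkpoint, does its own work for the phase, then signals the
   core to its right. Requires an MPI installation to compile and run. */
#include <mpi.h>

#define TAG_CHECKPOINT 0

void wait_for_left_then_signal_right(int rank, int nprocs) {
    int token = 0;
    if (rank > 0)                 /* wait for the left neighbor's checkpoint */
        MPI_Recv(&token, 1, MPI_INT, rank - 1, TAG_CHECKPOINT,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* ... this core's comparisons/swaps for the current phase go here ... */

    if (rank < nprocs - 1)        /* tell the right neighbor it may proceed */
        MPI_Send(&token, 1, MPI_INT, rank + 1, TAG_CHECKPOINT,
                 MPI_COMM_WORLD);
}
```

Because MPI_Recv blocks until the matching MPI_Send arrives, the send/receive pair itself enforces the ordering and prevents the race condition.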

Odd-Even Sort Introduction

  • Introduction to a new sorting algorithm: Odd-Even Sort.

  • This algorithm allows for more parallel operations than bubble sort.

    • Contains two main phases (odd phase and even phase).

    • Describes how different processors (e.g., P0, P1) can perform exchanges simultaneously, allowing for more efficiency than bubble sort.

Phases and Structure of Odd-Even Sort

  • Processors alternate between two kinds of phases:

    • Even phase: compare-and-exchange the pairs at positions (0,1), (2,3), (4,5), and so on.

    • Odd phase: compare-and-exchange the pairs at positions (1,2), (3,4), (5,6), and so on.

    • The pairs within one phase are disjoint, so most cores can work simultaneously in both phases, significantly improving efficiency.
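The alternating phases above can be sketched sequentially (function names are illustrative); a parallel version would hand each disjoint pair within a phase to a different core.

```c
/* Compare neighbors a[j] and a[j+1], swapping them if out of order. */
static void compare_swap(int a[], int j) {
    if (a[j] > a[j + 1]) { int t = a[j]; a[j] = a[j + 1]; a[j + 1] = t; }
}

/* Odd-even transposition sort: even phases handle pairs (0,1), (2,3), ...;
   odd phases handle pairs (1,2), (3,4), ... . The pairs in one phase share
   no elements, so they are independent of each other. */
void odd_even_sort(int a[], int n) {
    for (int phase = 0; phase < n; phase++) {
        int start = (phase % 2 == 0) ? 0 : 1;   /* even or odd phase */
        for (int j = start; j + 1 < n; j += 2)  /* disjoint pairs */
            compare_swap(a, j);
    }
}
```

On the reverse-ordered input {4, 3, 2, 1}, all four phases are needed before the list becomes 1 2 3 4, matching the worst-case discussion below.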

Complexity Assessments and Expectations

  • Worst-case scenario involves sorting an input in reverse order, requiring up to N phases to guarantee that the list is sorted.

  • Complexity determined by counting operations like swaps, with an eye on the phases executed.

  • Understanding impacts on scalability when increasing the number of processors used.

  • Perfect scalability is rarely achieved and requires careful mathematical and empirical analysis.

  • Special cases are needed for the endpoint processors (the first and the last) to prevent errors during the communication process.

Key Insights for Upcoming Lab Assignment

  • Students will implement the Odd-Even Sort for their next lab assignment.

  • Emphasis on ensuring each sub-list is sorted locally before comparing boundary elements to maintain the overall sorted order.

  • Understanding and applying the definition of a sorted list, in particular what "sorted" means across the sub-lists held by individual processors, is crucial.