This guide covers five classic sorting algorithms: Bubble, Selection, Insertion, Merge, and Quick Sort, highlighting the strengths, weaknesses, and best use cases of each.
Comparison and Swapping: Bubble Sort iteratively compares adjacent elements in the list, swapping them whenever the first is greater than the second (a code sketch follows this list).
Passes Through the List: After each pass, the largest unsorted element "bubbles up" to its correct position at the end of the list.
Gradual Reduction: With each subsequent pass, the number of elements to compare decreases as the last elements are already sorted.
Best, Average, and Worst Case: The best case (already sorted) is O(n), provided the algorithm stops early after a pass with no swaps; the average and worst cases (reverse order) are O(n^2).
Simplicity Over Efficiency: Easy to understand and implement but inefficient for large datasets due to repeated comparisons and swaps.
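To make the passes concrete, here is a minimal Bubble Sort sketch in Python; the function name bubble_sort, the items parameter, and the early-exit swapped flag are illustrative choices, not anything prescribed by the guide.

```python
def bubble_sort(items):
    """Sort a list in place by repeatedly swapping adjacent out-of-order pairs."""
    n = len(items)
    for end in range(n - 1, 0, -1):
        swapped = False
        # After this pass, the largest unsorted element settles at index `end`.
        for i in range(end):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:
            # No swaps means the list is already sorted; this early exit is
            # what gives the O(n) best case on sorted input.
            break
    return items
```

Note how `end` shrinks with each pass, mirroring the gradual reduction described above.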
Find Minimum: In each pass, Selection Sort identifies the smallest (or largest, for descending order) element in the unsorted portion of the list (a code sketch follows this list).
Swap with First Unsorted: Swaps this smallest element with the first element of the unsorted portion.
Sorted/Unsorted Division: Expands the sorted portion of the list one element at a time by moving the boundary after each swap.
Time Complexity: Performs O(n^2) comparisons in all cases (best, average, and worst), making it inefficient for large datasets, although it never makes more than n - 1 swaps.
Space Efficiency: It is an in-place algorithm requiring only O(1) extra memory, making it suitable for memory-constrained environments.
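A minimal Selection Sort sketch in Python under the same assumptions (illustrative names, a list of mutually comparable elements):

```python
def selection_sort(items):
    """Sort a list in place by repeatedly selecting the minimum of the unsorted part."""
    n = len(items)
    for boundary in range(n - 1):
        # Find the smallest element in the unsorted portion items[boundary:].
        min_index = boundary
        for i in range(boundary + 1, n):
            if items[i] < items[min_index]:
                min_index = i
        # Swap it to the front of the unsorted portion, growing the sorted prefix.
        items[boundary], items[min_index] = items[min_index], items[boundary]
    return items
```

Regardless of input order, the inner loop always scans the entire unsorted portion, which is why every case costs O(n^2) comparisons.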
Iterative Insertion: Insertion Sort processes elements one at a time, inserting each into its correct position within the sorted portion of the list (a code sketch follows this list).
Shifting Elements: During insertion, shifts larger elements in the sorted portion one position to the right to make space for the current element.
Best Case Performance: When the list is already sorted, it performs O(n) comparisons, making it efficient for nearly sorted data.
Average and Worst Case: Requires O(n^2) time for random or reverse-ordered data due to nested iterations.
In-Place and Stable: Operates directly on the input list without extra memory, preserving the relative order of equal elements (stability).
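A minimal Insertion Sort sketch in Python (names are illustrative):

```python
def insertion_sort(items):
    """Sort a list in place by inserting each element into the sorted prefix."""
    for j in range(1, len(items)):
        current = items[j]
        i = j - 1
        # Shift larger elements one slot right to open space for `current`.
        # Strict > (not >=) leaves equal elements in order, preserving stability.
        while i >= 0 and items[i] > current:
            items[i + 1] = items[i]
            i -= 1
        items[i + 1] = current
    return items
```

On an already-sorted list the while loop never runs, so each element costs a single comparison: the O(n) best case.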
Divide and Conquer: Merge Sort recursively divides the list into two halves until each sublist contains a single element (a code sketch follows this list).
Merge Process: Combines two sorted sublists into a single sorted list by comparing elements and arranging them in order.
Time Complexity: Efficient, with a guaranteed O(n log n) time complexity in all cases (best, average, and worst).
Space Complexity: Requires O(n) additional space for temporary storage during merging, making it less space-efficient than in-place algorithms.
Stability: Maintains the relative order of equal elements, making it a stable sorting algorithm.
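A minimal Merge Sort sketch in Python; this version returns a new list rather than sorting in place, which is where the O(n) extra space goes (names are illustrative):

```python
def merge_sort(items):
    """Return a sorted copy of the list using recursive divide and conquer."""
    if len(items) <= 1:
        return items  # A list of zero or one elements is already sorted.
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])
    # Merge: repeatedly take the smaller front element of the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= takes from the left half on ties: stability.
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])  # At most one of these tails is non-empty.
    merged.extend(right[j:])
    return merged
```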
Partitioning: Quick Sort selects a pivot element and rearranges the list so that all elements less than the pivot are on its left and all greater elements are on its right (a code sketch follows this list).
Recursive Sorting: Recursively applies the partitioning process to the sublists on the left and right of the pivot.
Time Complexity: Has average and best-case time complexity of O(n log n), but the worst case (e.g., when the pivot is always the smallest or largest element) is O(n^2).
In-Place Sorting: Sorts in place, using only the recursion stack (O(log n) deep on average), making it more space-efficient than merge sort.
Unstable: Does not necessarily preserve the relative order of equal elements, making it an unstable sorting algorithm.
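A minimal in-place Quick Sort sketch in Python using the Lomuto partition scheme with the last element as pivot; that pivot rule is one common choice for illustration, not something the guide fixes (names are illustrative):

```python
def quick_sort(items, low=0, high=None):
    """Sort a list in place by partitioning around a pivot, then recursing."""
    if high is None:
        high = len(items) - 1
    if low < high:
        pivot = items[high]  # Last element as pivot (Lomuto scheme).
        boundary = low       # Everything before `boundary` is < pivot.
        for i in range(low, high):
            if items[i] < pivot:
                items[i], items[boundary] = items[boundary], items[i]
                boundary += 1
        # Place the pivot between the two partitions.
        items[boundary], items[high] = items[high], items[boundary]
        # Recurse on the sublists to the left and right of the pivot.
        quick_sort(items, low, boundary - 1)
        quick_sort(items, boundary + 1, high)
    return items
```

Picking the pivot at random (or by median-of-three) makes the O(n^2) worst case unlikely on real inputs.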