Midterms
Bubble Sort
It is the simplest sorting algorithm: it works by repeatedly swapping adjacent elements that are in the wrong order.
O(n²) - Bubble Sort
(Complexity)
Bubble Sort is not suitable for large data sets as its average and worst-case complexities are _, where n is the number of items
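The swap-based passes described above can be sketched in Python (a minimal illustration, with an early-exit check once a pass makes no swaps):

```python
def bubble_sort(arr):
    """Repeatedly swap adjacent out-of-order elements until none remain."""
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in their final place.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps means the array is already sorted
            break
    return arr
```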
Selection Sort
It sorts an array by repeatedly finding the minimum element (considering ascending order) from the unsorted part and putting it at the beginning of that part.
The algorithm maintains two subarrays in a given array.
The subarray is already sorted
The remaining subarray is unsorted
In every iteration, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray
O(n²) - Selection Sort
(Complexity)
Selection Sort is not suitable for large data sets as its average and worst-case complexities are _, where n is the number of items
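The two-subarray idea above can be sketched in Python; `arr[:i]` plays the role of the sorted subarray and the rest is the unsorted one:

```python
def selection_sort(arr):
    """Grow a sorted prefix by repeatedly moving the unsorted minimum to it."""
    n = len(arr)
    for i in range(n - 1):
        # arr[:i] is already sorted; find the minimum of the unsorted part.
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Move that minimum to the end of the sorted subarray.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr
```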
Insertion Sort
It is an in-place comparison-based sorting algorithm. A sub-list is maintained which is always sorted; for example, the lower part of the array is kept sorted. An element to be inserted into this sorted sub-list must find its appropriate place and then be inserted there.
The array is searched sequentially and unsorted items are moved and inserted into the sorted sub-list in the same array.
O(n²) - Insertion Sort
(Complexity)
Insertion Sort is not suitable for large data sets as its average and worst-case complexities are _, where n is the number of items
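A minimal Python sketch of the in-place insertion described above, where `arr[:i]` is the sorted sub-list:

```python
def insertion_sort(arr):
    """Insert each element into its place within the sorted prefix."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger elements of the sorted sub-list one slot right.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key  # insert the element at its appropriate place
    return arr
```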
Shell Sort
It is a highly efficient sorting algorithm based on the insertion sort algorithm. It avoids the large shifts insertion sort makes when a small value sits far to the right and must be moved far to the left. It first uses insertion sort on widely spaced elements, then sorts progressively less widely spaced elements. The spacing is termed the interval, and it is calculated using Knuth's formula
h = h * 3 + 1
(Knuth's Formula)
where h is the interval, with initial value 1
O(n^(3/2)) - Shell Sort
(Complexity)
Shell Sort is quite efficient for medium-sized data sets; its average and worst-case complexity depend on the gap sequence, and with Knuth's sequence they are _, where n is the number of items.
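The gapped insertion passes can be sketched in Python, building the interval from Knuth's h = h * 3 + 1 sequence (1, 4, 13, 40, ...):

```python
def shell_sort(arr):
    """Insertion-sort elements h apart, shrinking h down Knuth's sequence."""
    n = len(arr)
    # Build the largest Knuth interval that fits the array.
    h = 1
    while h < n // 3:
        h = h * 3 + 1
    while h >= 1:
        # Gapped insertion sort: elements h apart form one sub-list.
        for i in range(h, n):
            key = arr[i]
            j = i
            while j >= h and arr[j - h] > key:
                arr[j] = arr[j - h]
                j -= h
            arr[j] = key
        h //= 3  # step down to the next, less widely spaced interval
    return arr
```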
Heap Sort
It is a comparison-based sorting algorithm and an improved version of selection sort. Like selection sort, it divides its input into a sorted and an unsorted region, and it iteratively shrinks the unsorted region by extracting the largest element from it and inserting it into the sorted region.
Unlike selection sort, it does not waste time with a linear-time scan of the unsorted region; rather, it maintains the unsorted region in a heap data structure to find the largest element more quickly in each step
O(n log n) - Heap Sort
(Complexity)
Heap sort is somewhat slower in practice on most machines than quicksort, but it has the advantage of a more favorable worst-case _ runtime
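The heap-maintained unsorted region can be sketched in Python with an in-place max-heap (a hand-rolled sift-down rather than the `heapq` module, so the sorted region can grow at the end of the same array):

```python
def heap_sort(arr):
    """Build a max-heap, then repeatedly move the root into the sorted tail."""
    n = len(arr)

    def sift_down(root, end):
        # Restore the max-heap property in arr[:end] below `root`.
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and arr[child + 1] > arr[child]:
                child += 1  # pick the larger child
            if arr[root] >= arr[child]:
                return
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    # Heapify: arrange the whole array as the unsorted region's max-heap.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # Extract the largest element into the sorted region at the end.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr
```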
Merge Sort
It is a divide and conquer algorithm. It divides the input array into two halves, calls itself for the two halves, and then merges the two sorted halves. It uses a function to put together two halves: merge(arr, l, m, r) is the key process, which assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two sorted sub-arrays into one.
O(n log n) - Merge Sort
(Complexity)
The time complexity of Merge Sort is _ in all 3 cases (worst, average, and best), as merge sort always divides the array into two halves and takes linear time to merge them.
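The halving-and-merging scheme can be sketched in Python, keeping the merge(arr, l, m, r) signature mentioned in the description:

```python
def merge(arr, l, m, r):
    """Merge the sorted runs arr[l..m] and arr[m+1..r] in place."""
    left, right = arr[l:m + 1], arr[m + 1:r + 1]
    i = j = 0
    k = l
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            arr[k] = left[i]
            i += 1
        else:
            arr[k] = right[j]
            j += 1
        k += 1
    # Copy whatever remains of either run.
    arr[k:k + len(left) - i] = left[i:]
    arr[k + len(left) - i:r + 1] = right[j:]

def merge_sort(arr, l=0, r=None):
    """Split arr[l..r] into halves, sort each, then merge them."""
    if r is None:
        r = len(arr) - 1
    if l < r:
        m = (l + r) // 2
        merge_sort(arr, l, m)
        merge_sort(arr, m + 1, r)
        merge(arr, l, m, r)
    return arr
```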
Quick Sort
It is a divide and conquer algorithm. It picks an element as a pivot and partitions the given array around the picked pivot. There are many different versions of this algorithm that pick pivot in different ways.
The key process in this algorithm is partition(). Its goal is, given an array and an element x of the array as the pivot, to put x at its correct position in the sorted array, with all smaller elements (smaller than x) before x and all greater elements (greater than x) after x. All this should be done in linear time.
O(n log n) - Quick Sort
(Complexity)
In Quick Sort, we can get an idea of the average case by considering a partition that puts, for instance, n/10 elements in one set and 9n/10 elements in the other:
T(n) = T(n/10) + T(9n/10) + Θ(n) = _
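One common way to realize partition() in linear time is the Lomuto scheme, sketched below in Python with the last element chosen as pivot (the description notes many pivot-picking variants exist; this is just one):

```python
def partition(arr, lo, hi):
    """Lomuto partition: place pivot arr[hi] at its sorted position."""
    pivot = arr[hi]
    i = lo  # boundary: everything left of i is smaller than the pivot
    for j in range(lo, hi):
        if arr[j] < pivot:
            arr[i], arr[j] = arr[j], arr[i]
            i += 1
    arr[i], arr[hi] = arr[hi], arr[i]  # pivot lands at index i
    return i

def quick_sort(arr, lo=0, hi=None):
    """Sort arr[lo..hi] by partitioning around a pivot and recursing."""
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)
        quick_sort(arr, p + 1, hi)
    return arr
```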
Bin/Bucket Sort
It runs in linear time on average. Like counting sort, this algorithm is fast because it assumes something about the input: that it is generated by a random process that distributes elements uniformly over the interval [0, 1).
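Under that uniformity assumption, the algorithm can be sketched in Python: scatter the n inputs into n equal-width buckets over [0, 1), sort each (on average O(1)-sized) bucket, and concatenate:

```python
def bucket_sort(arr):
    """Bucket sort for floats assumed uniform over [0, 1)."""
    n = len(arr)
    if n == 0:
        return arr
    buckets = [[] for _ in range(n)]
    for x in arr:
        # A value in [0, 1) maps to bucket floor(x * n).
        buckets[int(x * n)].append(x)
    for b in buckets:
        b.sort()  # each bucket holds O(1) elements on average
    return [x for b in buckets for x in b]
```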
Radix Sort
It is one of the sorting algorithms used to sort a list of integer numbers. In this algorithm, the list is sorted based on the digits of the individual numbers, from the least significant digit to the most significant digit.
This algorithm requires the number of passes that are equal to the number of digits present in the largest number among the list of numbers.
Example:
If the largest number is a 3-digit number, then that list is sorted with 3 passes.
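The digit-by-digit passes can be sketched in Python for non-negative integers, using a stable scatter into ten digit buckets per pass; the number of passes equals the number of digits in the largest number, as noted above:

```python
def radix_sort(arr):
    """LSD radix sort: one stable pass per digit of the largest number."""
    if not arr:
        return arr
    largest = max(arr)
    exp = 1  # 1 -> ones digit, 10 -> tens digit, ...
    while largest // exp > 0:
        # Stable distribution by the current digit (acts like counting sort).
        buckets = [[] for _ in range(10)]
        for x in arr:
            buckets[(x // exp) % 10].append(x)
        arr = [x for b in buckets for x in b]
        exp *= 10
    return arr
```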