What is required to prove the correctness of an algorithm?
There are two things required to prove an algorithm is correct:
1. Prove that the algorithm always terminates.
2. Prove that the algorithm produces the correct results after it terminates, using a loop invariant: a property that is true at the start of each iteration.
The loop invariant argument itself has three parts: the initial state (the invariant holds before the first iteration), maintenance (each iteration preserves it), and termination (the invariant at loop exit implies the result is correct).
What is the loop invariant of binary search?
The general idea in binary search is to find the midpoint between lo and hi pointers into a sorted list, then narrow the search to one side of it.
If mid and lo ever point to the same item, an update like lo = mid may result in an infinite loop. Thus the window must shrink every iteration, e.g., by ensuring there is always at least 1 element between lo and hi (lo < hi - 1), or by always excluding mid from the new window.
Specifically, loop invariant: the key is in the array if and only if the key is in array[lo...hi-1].
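A minimal Python sketch of one common variant (0-indexed, assuming the input list is sorted ascending). It keeps the search window as array[lo...hi-1] and always excludes mid when narrowing, so the window shrinks on every iteration:

```python
def binary_search(array, key):
    """Return an index of key in the sorted list array, or -1 if absent."""
    lo, hi = 0, len(array)            # invariant: key in array iff key in array[lo..hi-1]
    while lo < hi:                    # at least one candidate remains
        mid = (lo + hi) // 2          # lo <= mid < hi
        if array[mid] == key:
            return mid
        elif array[mid] < key:
            lo = mid + 1              # key, if present, is in array[mid+1..hi-1]
        else:
            hi = mid                  # key, if present, is in array[lo..mid-1]
    return -1                         # empty window: key is not in the array
```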
What is a stable sorting algorithm?
A sorting algorithm is considered stable if it maintains the relative order of elements that have equal keys.
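As a quick illustration: Python's built-in sorted is documented to be stable, so records with equal keys keep their input order.

```python
# Sorting by the count (second field): "apple" and "cherry" share key 2,
# and a stable sort keeps "apple" before "cherry", as in the input.
records = [("apple", 2), ("banana", 1), ("cherry", 2)]
print(sorted(records, key=lambda r: r[1]))
# [('banana', 1), ('apple', 2), ('cherry', 2)]
```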
Describe the process of selection sort
What is its time complexity?
What is the loop invariant?
Selection sort considers a list as being divided into two parts:
- The first part is already sorted.
- The second part is still to be sorted.
Initially, the sorted part is empty, as we have not sorted any elements yet.
Then, repeatedly search the unsorted part for the smallest element and swap it to the end of the sorted part. The sorted part is now one element longer.
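A minimal 0-indexed Python sketch of this process (the invariants below use 1-indexed array[1...n]):

```python
def selection_sort(array):
    """In-place selection sort."""
    n = len(array)
    for i in range(n):                 # array[0..i-1] is the sorted part
        min_index = i
        for j in range(i + 1, n):      # scan the unsorted part array[i..n-1]
            if array[j] < array[min_index]:
                min_index = j
        # Move the minimum of the unsorted part to the end of the sorted part.
        array[i], array[min_index] = array[min_index], array[i]
    return array
```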
Invariant
First, some observations:
array[1...i-1] is the sorted part of the list
array[i...n] is the unsorted part of the list
Invariant 1: array[1...i-1] is sorted.
Invariant 2: No element in array[i...n] is less than any element in array[1...i-1].
Initial state:
Before the first iteration, i = 1, so array[1...i-1] is empty, and both invariants hold trivially.
Maintenance:
1. At each iteration, we seek the minimum element of array[i...n]. Since we know that this minimum element of array[i...n] is not smaller than any element in array[1...i-1], then swapping array[i] with the minimum of array[i...n] maintains the first invariant.
2. Since we took the minimum element of array[i...n], after the swap array[i] is no greater than any element of array[i+1...n], so the second invariant is also maintained.
Termination:
When the i loop terminates, the invariant still holds with i = n + 1, so array[1...i-1] = array[1...n], i.e. the whole array, is sorted.
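As a sanity check (not a proof), the two invariants can be asserted at runtime. This sketch mirrors the initial state/maintenance/termination argument, translated to 0-indexing:

```python
def selection_sort_checked(array):
    """Selection sort with runtime checks of both loop invariants."""
    n = len(array)
    for i in range(n):
        # Invariant 1: array[0..i-1] is sorted (vacuously true when i = 0).
        assert all(array[k] <= array[k + 1] for k in range(i - 1))
        # Invariant 2: nothing in array[i..n-1] is smaller than array[0..i-1].
        assert all(array[s] <= array[u] for s in range(i) for u in range(i, n))
        min_index = min(range(i, n), key=lambda j: array[j])
        array[i], array[min_index] = array[min_index], array[i]
    # Termination: the invariant now covers the whole array.
    assert all(array[k] <= array[k + 1] for k in range(n - 1))
    return array
```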
Complexity:
Selection sort iterates N times, each time finding the minimum of the unsorted part and swapping it into place. Finding the minimum takes O(N) time to check each remaining element, so the total number of comparisons is N + (N-1) + ... + 1 = N(N+1)/2, which is O(N²).
What is the lower bound complexity of comparison-based sorting algorithms?
Ω(N log(N)): no comparison-based sort can do better than on the order of N log(N) comparisons in the worst case.
Examples of comparison-based sorting algorithms:
Selection sort O(N²)
Insertion sort O(N²)
Merge sort O(N log(N))
What is the lower bound complexity of non-comparison-based sorting algorithms?
Ω(N): every element must be examined at least once, and non-comparison sorts such as counting sort can achieve this under suitable assumptions.
Examples of O(N) non-comparison sorting algorithms:
Counting sort
Radix sort
Describe the counting sort algorithm
Counting sort works by counting the number of occurrences of each element in the list.
For example, in a list of integers [0, 1, 1, 2, 2, 2, 3]
0: occurs once
1: occurs twice
2: occurs thrice
3: occurs once
So we create a count array of size 4, count = [0, 0, 0, 0], where count[i] will hold the number of occurrences of i in our input; after counting, count = [1, 2, 3, 1].
Reconstructing the array in sorted order after counting the occurrences of each element is as simple as emitting count[i] copies of i, for each i in increasing order.
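A minimal sketch, assuming every element is an integer x with 0 <= x < U:

```python
def counting_sort(array, U):
    """Counting sort for non-negative integers below U."""
    count = [0] * U                   # count[i] = number of occurrences of i
    for x in array:
        count[x] += 1
    result = []
    for i in range(U):                # emit count[i] copies of each i, in order
        result.extend([i] * count[i])
    return result

print(counting_sort([0, 1, 1, 2, 2, 2, 3], U=4))  # [0, 1, 1, 2, 2, 2, 3]
```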
What is the time complexity of counting sort?
Counting sort is O(N + U), where U is the range of elements in the input. For example, [0, 1, 1, 2, 2, 2, 3] has a range of 4. But if we consider an input of [0, 1, 2, 1000], our range is 0 to 1000.
If U is at most on the order of N, we can consider counting sort O(N). If not, it is bounded by O(N + U).
What are the implications of choosing counting sort over selection sort?
Counting sort has the potential for much better time complexity, O(N + U), or ideally O(N), whereas the complexity of selection sort is O(N²).
Selection sort, however, is in-place, meaning it has O(1) auxiliary space complexity; counting sort needs O(N + U) auxiliary space for its count and output arrays.
The cost saved in time complexity is paid for with additional space requirements.
Is counting sort a stable algorithm?
Counting sort can be made stable, at an additional space cost, by storing the occurrences of each key as a list of the original elements (e.g., a linked list per key). This preserves the order of elements with equal keys, and a sorted list can be reconstructed with this order intact, as sketched below.
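A sketch of that idea, with Python lists standing in for the linked lists; the key parameter is an illustrative assumption so that whole records, not just bare integers, can be sorted:

```python
def stable_counting_sort(array, U, key=lambda x: x):
    """Stable counting sort, assuming 0 <= key(x) < U for every element."""
    buckets = [[] for _ in range(U)]
    for x in array:                   # equal keys are appended in input order...
        buckets[key(x)].append(x)
    return [x for bucket in buckets for x in bucket]  # ...and read back in that order
```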
Describe the radix sort algorithm
Given a list of integers with k digits, sort the elements by each digit one at a time, from the right-most (least significant) to the left-most (most significant) digit.
For example,
2 1 6
2 5 3
6 4 1
Sorting based on the last digit (6, 3, 1):
6 4 1
2 5 3
2 1 6
... and so on for each digit
The per-digit sort must be stable for radix sort to be correct, since later passes must preserve the order established by earlier ones; the choice of digit sort also determines the overall complexity.
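A minimal LSD (least-significant-digit-first) sketch in Python, assuming non-negative integers; each pass reuses the stable bucket idea from the counting sort cards:

```python
def radix_sort(array, base=10):
    """LSD radix sort for non-negative integers."""
    if not array:
        return array
    place = 1                          # place value of the current digit
    while max(array) // place > 0:     # one pass per digit of the largest key
        buckets = [[] for _ in range(base)]
        for x in array:                # stable: equal digits keep their order
            buckets[(x // place) % base].append(x)
        array = [x for bucket in buckets for x in bucket]
        place *= base
    return array

print(radix_sort([216, 253, 641]))     # [216, 253, 641]
```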
What is the time complexity of radix sort when using a stable counting sort?
Consider an input of N integers in base 10, each with k digits.
The complexity of counting sort is O(N + U)
If we have k digits, the complexity will be:
O(k (N + U)). But since we only use counting sort on single-digit values at each pass of radix sort, and all integers are in base 10, U is bounded by 10, which is constant, giving:
O(k N)
What is the time complexity of radix sort on a list of N integers in an arbitrary base b, with k digits? Assume you use counting sort to sort each digit.
The complexity of counting sort is O(N + U), but in our case each pass sorts single base-b digits, so U is bounded by b; equivalently, each pass is O(N + b).
Radix sort performs k passes, one per digit, so we have a total complexity of:
O(k (N + b))
We can make each counting sort O(N) by choosing a base b such that b is also O(N). Suppose b = N; then the number of digits of the largest value M is k = log_N(M), and we have:
O(N log_N(M))
Note: as the base b increases, the number of counting-sort passes decreases, but the cost of each pass, O(N + b), goes up; decreasing b does the opposite.
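A small worked example of the trade-off; the values N = 1000 and M = 999,999,999 are hypothetical:

```python
def num_digits(m, base):
    """Number of digits of m in the given base, i.e. the number of radix passes."""
    d = 0
    while m > 0:
        m //= base
        d += 1
    return max(d, 1)

N, M = 1_000, 999_999_999
for b in (10, N, M + 1):
    print(f"base {b}: {num_digits(M, b)} passes, each costing O(N + b) = O({N + b})")
# base 10: 9 cheap passes; base N: 3 passes, each still O(N);
# base M+1: a single pass, but that pass is ordinary counting sort at O(M) cost.
```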