Quick Sort Pseudo Code
procedure QUICK_SORT(A) {
    if n <= 1 then {
        return
    }
    pivot <- rand_int(0, n)
    pivot_pos <- partition(A, pivot)
    quick_sort(A[0:pivot_pos])
    quick_sort(A[pivot_pos+1:n])
}
Partitioning Pseudo Code
procedure PARTITION(A, pivot) {
    swap(A, pivot, 0)
    pivot_val <- A[0]
    low <- 1
    high <- n-1
    while low <= high do {
        while low < n and A[low] <= pivot_val do {
            low += 1
        }
        while A[high] > pivot_val do {
            high -= 1
        }
        if high > low then {
            swap(A, low, high)
        }
    }
    swap(A, 0, high)
    return high
}
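A runnable Python sketch of the two procedures above, using explicit lo/hi bounds instead of array slices (function and variable names are my own, not from the notes):

import random

def partition(A, lo, hi):
    # Move a random pivot to the front, then partition A[lo..hi] around its value.
    p = random.randint(lo, hi)
    A[lo], A[p] = A[p], A[lo]
    pivot_val = A[lo]
    low, high = lo + 1, hi
    while low <= high:
        while low <= hi and A[low] <= pivot_val:
            low += 1
        while A[high] > pivot_val:   # stops at lo at worst, since A[lo] == pivot_val
            high -= 1
        if low < high:
            A[low], A[high] = A[high], A[low]
    A[lo], A[high] = A[high], A[lo]  # place the pivot at its final position
    return high

def quick_sort(A, lo=0, hi=None):
    # Sorts A[lo..hi] in place.
    if hi is None:
        hi = len(A) - 1
    if lo >= hi:
        return
    pivot_pos = partition(A, lo, hi)
    quick_sort(A, lo, pivot_pos - 1)
    quick_sort(A, pivot_pos + 1, hi)

A = [17, 3, 24, 22, 12]
quick_sort(A)
print(A)  # [3, 12, 17, 22, 24]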
What is quickselect?
An algorithm for finding the kth smallest item in a list. Commonly it is used to find the median, i.e. the (len(A)//2)th smallest element.
How is quickselect different than quicksort?
Uses the same idea as quick sort, but only needs to recurse on one side of the partition
What is expected runtime of quick select?
Linear, i.e. O(n) time in expectation.
Quick Select Pseudo Code
procedure QUICK_SELECT(A, k) {
    pivot <- rand_int(0, n)
    pivot_pos <- partition(A, pivot)
    if pivot_pos == k-1 then {
        return A[pivot_pos]
    } else if pivot_pos > k-1 then {
        return quick_select(A[0:pivot_pos], k)
    } else {
        return quick_select(A[pivot_pos+1:n], k-pivot_pos-1)
    }
}
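A short runnable Python sketch of quickselect; this variant filters around the pivot value rather than partitioning in place, which also handles duplicate keys (names are my own):

import random

def quick_select(A, k):
    # Returns the kth smallest element of A (k is 1-indexed).
    pivot_val = A[random.randrange(len(A))]
    left = [x for x in A if x < pivot_val]       # strictly smaller than the pivot
    mid_count = A.count(pivot_val)               # equal to the pivot
    if k <= len(left):
        return quick_select(left, k)             # answer lies left of the pivot
    if k <= len(left) + mid_count:
        return pivot_val                         # the pivot is the answer
    right = [x for x in A if x > pivot_val]
    return quick_select(right, k - len(left) - mid_count)

print(quick_select([17, 3, 24, 22, 12], 3))  # 17, the median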
what operators do we have for comparison model of computation?
>, <, <=, >=, !=, ==
How can any algorithm in the comparison model be visualized?
As a binary tree in which internal nodes are comparisons, each run of the algorithm follows a path from the root to a leaf, and the leaf reached is the algorithm's output.
In the context of sorting, how many possible outputs or leaf nodes does an algorithm have?
An algorithm for sorting has n! possible outputs or leaf nodes, where n is the number of items being sorted.
What is the height of the binary tree for sorting algorithms?
The height must be at least log2(n!), since a binary tree of height h has at most 2^h leaves and the tree needs n! of them.
What is a simpler upper bound for the height of the sorting binary tree?
Since n! <= n^n, log2(n!) <= log2(n^n) = n log2(n), so n log(n) is a simpler upper bound.
How can we find a lower bound for the height of the sorting tree?
By noticing that at least n/2 of the factors in the product n! = n(n-1)...1 are greater than or equal to n/2, so n! >= (n/2)^(n/2).
What is the lower bound derived from the observation about n/2 items?
The lower bound is (n/2) log(n/2), since log2(n!) >= log2((n/2)^(n/2)) = (n/2) log2(n/2).
What is the simplified form of the lower bound in asymptotic notation?
The lower bound is Ω(n log n): any comparison-based sort must make Ω(n log n) comparisons.
what does log(n/2) break down into?
log(n/2) = log(n) - log(2), which is log2(n) - 1 in base 2.
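Lower Bound Derivation
Chaining the cards above together (all logs base 2):
height >= log(n!)              [a tree with n! leaves has height at least log(n!)]
       >= log((n/2)^(n/2))     [since n! >= (n/2)^(n/2)]
       = (n/2) log(n/2)
       = (n/2)(log(n) - 1)
       = Ω(n log n)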
What is the main question regarding the random access model of computation compared to the comparison model?
Can we do better than the comparison model using random access, similar to how we did with hashing vs. binary search?
What data structure is suggested for efficient sorting in the random access model?
A direct access array, which is a large array that can fit any key in the universe of keys (assumed to be positive integers).
How are keys inserted into the direct access array?
Keys are inserted at their respective indices in the direct access array.
What condition must be met for keys inserted in the direct access array?
The keys must be unique.
How can the sorted order be obtained from the direct access array?
By walking through every position in the array and collecting the non-null items.
What is the time complexity for sorting if u <= cn in the direct access model?
The sorting can be done in O(u + n) = O(n) time: O(n) to insert the keys and O(u) to walk the array.
What if u <= n^2? How can the keys be handled?
Break each key into two components, a and b, where both are < n: a = k//n and b = k%n.
How can the original key k be recovered from a and b?
The original key can be recovered using the formula k = an + b
Given the example [17, 3, 24, 22, 12], what would the tuples (a,b) be after breaking down the keys?
The tuples would be (3,2),(0,3),(4,4),(4,2),(2,2)
What base is used for the concatenated keys in the example, and what do the keys convert to?
The base used is n (base 5 in this case), converting the numbers to [32, 03, 44, 42, 22].
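A quick runnable Python check of this decomposition (split_key is a made-up helper name):

def split_key(k, n):
    # k == a*n + b, with 0 <= a, b < n whenever k < n**2
    return (k // n, k % n)

A = [17, 3, 24, 22, 12]
n = len(A)  # 5
print([split_key(k, n) for k in A])
# [(3, 2), (0, 3), (4, 4), (4, 2), (2, 2)]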
What is the method called when sorting items by one element of a tuple at a time?
Tuple sort.
Which sorting order is correct when sorting tuples?
The sort must be stable, meaning that elements with equal keys retain their relative positions.
What sorting algorithm can we use under the hood?
We can use any sorting algorithm, but now that each key is < n we can use direct access array sort.
What is the main principle behind counting sort?
Counting sort combines direct access sorting with chaining to handle collisions while maintaining the input order.
How can collisions be managed in counting sort?
By using a dynamic array that appends colliding elements to the end, thus retaining the original order.
What is the first step in performing radix sort when u <= n^2?
Split the keys into two components, a and b, and then use counting sort to sort the items using tuple sort, first on b and then on a
How do the tuples relate to the base in the context of radix sort?
The concatenation of the tuples corresponds to the numbers expressed in base n.
What condition allows us to convert bases in radix sort?
If u <= n^c, we can convert bases and create tuples of length c.
what is the time complexity of radix sort when c is constant?
O(n)
counting sort Pseudo code
procedure COUNTING_SORT(A) {
    u <- max([x.key for x in A])
    D <- [[] for i in 0..u]
    for x in A do {
        D[x.key].append(x)
    }
    i <- 0
    for chain in D do {
        for x in chain do {
            A[i++] <- x
        }
    }
}
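A runnable Python version of the pseudocode above, sketched with a key function instead of x.key so it also works on plain integers:

def counting_sort(A, key=lambda x: x):
    # Stable: colliding items are appended to their chain in input order.
    if not A:
        return
    u = max(key(x) for x in A)
    D = [[] for _ in range(u + 1)]   # one chain per possible key
    for x in A:
        D[key(x)].append(x)
    i = 0
    for chain in D:
        for x in chain:
            A[i] = x
            i += 1

A = [17, 3, 24, 22, 12]
counting_sort(A)
print(A)  # [3, 12, 17, 22, 24]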
Radix Sort Pseudo Code
procedure RADIX_SORT(A) {
    u <- max([x.key for x in A])
    c <- ceil(log_n(u))
    D <- [Obj() for i in 0..n]
    for i in 0..n do {
        D[i].digits <- digit_tuple(A[i], n, c)
        D[i].item <- A[i]
    }
    for i in 0..c do {
        for j in 0..n do {
            D[j].key <- D[j].digits[i]
        }
        counting_sort(D)
    }
    for i in 0..n do {
        A[i] <- D[i].item
    }
}
Base digit tuple pseudo code
procedure DIGIT_TUPLE(x, n, c) {
    high <- x.key
    digits <- []
    for i in 0..c do {
        low <- high % n
        high <- high // n
        digits.append(low)
    }
    return digits
}
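A runnable Python sketch of radix sort for non-negative integers, reusing the counting_sort sketch above as the stable per-digit pass; the expression (x // n**d) % n plays the role of DIGIT_TUPLE:

def radix_sort(A):
    # Write each key in base n and tuple-sort one digit at a time,
    # least significant digit first, using a stable sort per pass.
    n = len(A)
    if n <= 1:
        return
    u = max(A)
    c = 1
    while n ** c <= u:   # c = number of base-n digits needed
        c += 1
    for d in range(c):
        counting_sort(A, key=lambda x: (x // n ** d) % n)

A = [17, 3, 24, 22, 12]
radix_sort(A)
print(A)  # [3, 12, 17, 22, 24]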
What are the properties of a binary tree
A rooted tree;
each node has a left child, a right child, and a parent (each of which may be null);
each node stores an item (value);
for now, the order has nothing to do with the values.
What is the parent node of the root node?
The root node has a null parent node.
What defines a leaf node in a tree?
A leaf node has all null children.
How is the depth of a node defined?
The depth of a node is the length of the path from that node up to the root.
What does the height of a node represent?
The height of a node is the length of the longest path from that node down to a leaf in its subtree.
What is traversal order in the context of tree structures?
The traversal order (in-order) visits everything in a node's left subtree, then the node itself, then everything in its right subtree.
How does the traversal order depend on tree properties?
The traversal order with respect to item values depends on the properties used when building, inserting, or deleting from the tree.
What is the traversal order of the first tree given in Lecture Note 7?
The current traversal order is G, D, B, E, A, C, F.
Traversal Order Pseudo Code
procedure ITER(T) {
    if T == null then {
        return
    }
    iter(T.left)
    visit(T)
    iter(T.right)
}
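A runnable Python sketch: a minimal Node class with parent pointers (my own construction) and the in-order traversal, built on a tree consistent with the traversal order above:

class Node:
    def __init__(self, item, left=None, right=None):
        self.item = item
        self.left = left
        self.right = right
        self.parent = None
        for child in (left, right):
            if child is not None:
                child.parent = self

def iter_tree(T):
    # In-order: everything left of T, then T itself, then everything right of T.
    if T is None:
        return
    yield from iter_tree(T.left)
    yield T.item
    yield from iter_tree(T.right)

# A tree whose traversal order is G, D, B, E, A, C, F:
A = Node('A',
         Node('B', Node('D', Node('G')), Node('E')),
         Node('C', right=Node('F')))
print(list(iter_tree(A)))  # ['G', 'D', 'B', 'E', 'A', 'C', 'F']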
How do you find the first element in a tree T?
If T has a left child, recursively return the find first of that subtree. Otherwise, T is the first element.
What is the first element of the tree rooted at A in Lecture Notes 7?
The first element is G, so first(A) = G
What is the first element of the tree rooted at C in Lecture Notes 7?
The first element is C, so first(C) = C
What is the time complexity of the find first operation?
The time complexity is O(h), where h is the height of the tree
Find First Pseudo Code
procedure FIRST(T) {
    if T.left != null then {
        return first(T.left)
    } else {
        return T
    }
}
How do you find the last element in a tree T?
If T has a right child, recursively return the find last of that subtree. Otherwise, T is the last element.
What is the last element of the tree rooted at A?
The last element is F, so last(A) = F
What is the time complexity of the find last operation?
The time complexity is O(h), where h is the height of the tree.
Find Last Pseudo Code
procedure LAST(T) {
    if T.right != null then {
        return last(T.right)
    } else {
        return T
    }
}
How do you find the next element in a tree T?
If T has a right child, recursively return the find first of that subtree. Otherwise, go up the parent nodes until the node is not a right child of its parent. Return that parent node.
What is the next element after A?
The next element is C, so next(A) = C.
What is the next element after E?
The next element is A, so next(E) = A
What is the time complexity of the find next operation?
The time complexity is O(h), where h is the height of the tree.
Find next Pseudo Code
procedure NEXT(T) {
    if T.right != null then {
        return first(T.right)
    } else {
        while T.parent and T.parent.right == T do {
            T <- T.parent
        }
        return T.parent
    }
}
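The same logic in Python, reusing the Node class and example tree from the traversal sketch above (first and next_node are my names):

def first(T):
    # Leftmost node of the subtree rooted at T.
    return first(T.left) if T.left else T

def next_node(T):
    # Successor of T in traversal order; None if T is the last node.
    if T.right:
        return first(T.right)
    while T.parent and T.parent.right is T:
        T = T.parent           # climb while T is a right child
    return T.parent

E = A.left.right               # node E of the example tree
print(next_node(E).item)       # 'A'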
How do you find the previous element in a tree T?
If T has a left child, recursively return the find last of that subtree. Otherwise, go up the parent nodes until the node is not a left child of its parent. Return that parent node.
What is the previous element before A?
The previous element is E, so previous(A) = E
What is the time complexity of the find previous operation?
The time complexity is O(h), where h is the height of the tree.
Find previous Pseudo Code
procedure PREVIOUS(T) {
    if T.left != null then {
        return last(T.left)
    } else {
        while T.parent and T.parent.left == T do {
            T <- T.parent
        }
        return T.parent
    }
}
What is the purpose of the insert_before(T, new_node) function in a binary tree?
It inserts a new node into the position that makes the new node the previous node of node T
What happens if T.left is null during the insert_before operation?
If T.left is null, then T.left is set to new_node, and new_node.parent is set to T.
How does the insert_before function handle the situation when T.left is not null?
If T.left is not null, the function first moves T to previous(T), then sets T.right to new_node and new_node.parent to that node.
What is the time complexity of the insert_before operation?
The time complexity of the insert_before operation is O(h), where h is the height of the tree.
In the context of the insert_before function, what does "previous(T)" refer to?
"Previous(T)" refers to the node that comes before node T in the binary tree structure.
What relationships are established when a new node is inserted before node T?
The new node becomes the left child of T when T has no left child, and otherwise becomes the right child of previous(T); its parent pointer is set to whichever node it was attached to.
Insert Before Pseudo Code
procedure INSERT_BEFORE(T, new_node) {
    if T.left == null then {
        T.left <- new_node
        new_node.parent <- T
    } else {
        T <- previous(T)
        T.right <- new_node
        new_node.parent <- T
    }
}
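A Python sketch of insert_before on the same hypothetical Node class (last is the find-last helper):

def last(T):
    # Rightmost node of the subtree rooted at T.
    return last(T.right) if T.right else T

def insert_before(T, new_node):
    # Make new_node the node immediately before T in traversal order.
    if T.left is None:
        T.left = new_node
    else:
        T = last(T.left)       # previous(T); it never has a right child
        T.right = new_node
    new_node.parent = T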
What does insert_after(T, new_node) do?
Inserts a new node so that it becomes the next node (successor) of T in the traversal order.
What is the time complexity of insert_after?
O(h) where h is the height of the tree
Insert After Pseudo Code
procedure INSERT_AFTER(T, new_node) {
    if T.right == null then {
        T.right <- new_node
        new_node.parent <- T
    } else {
        T <- next(T)
        T.left <- new_node
        new_node.parent <- T
    }
}
What is delete(T)?
Deletes node T while preserving the traversal order of the remaining nodes.
what is the time complexity of delete?
O(h) where h is the height of the tree
Delete Pseudo Code
procedure DELETE(T) {
    if T.left == null and T.right == null then {
        if T.parent.left == T then {
            T.parent.left <- null
        } else {
            T.parent.right <- null
        }
        return
    }
    if T.left != null then {
        X <- previous(T)
    } else {
        X <- next(T)
    }
    T.item <- X.item
    delete(X)
}
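A Python sketch of delete, reusing first and last from the earlier sketches (like the pseudocode, it assumes a leaf being deleted is not the root):

def delete(T):
    if T.left is None and T.right is None:
        # Leaf: simply detach it from its parent.
        if T.parent.left is T:
            T.parent.left = None
        else:
            T.parent.right = None
        return
    # Copy the predecessor's (or successor's) item into T,
    # then delete that now-redundant node instead.
    X = last(T.left) if T.left else first(T.right)
    T.item = X.item
    delete(X)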
How is the order of items in a binary tree-backed sequence determined?
The order is determined by the sequence order dictated by operations like insert_at, delete, and insert_last, rather than by increasing value.
Why is the insert_at(x, i) operation in a binary tree potentially more efficient than in other data structures?
Because insert_at(x, i) can be O(h) in a binary tree, whereas in other structures, it has typically been linear time (O(n)).
What bookkeeping is required to efficiently find the position of index i in a binary tree-backed sequence?
Every node must keep track of the size of the subtree rooted at that node, denoted as node.size.
How is the size of subtrees updated during insertions and deletions?
The size of the subtrees is updated by going up to the parent nodes and adding or subtracting one depending on whether a node was inserted or deleted, which takes O(h).
How do you find the element at index i in a binary tree-backed sequence?
Check node.left.size vs i (treating node.left.size as 0 when node.left is null).
If node.left.size == i, return the node.
If node.left.size > i, recurse on node.left with index i.
If node.left.size < i, recurse on node.right with index i - node.left.size - 1.
What is the runtime for retrieving an element at index i using the get(i) function?
The runtime for get(i) is O(h), where h is the height of the tree.
What is the overall runtime for performing both get(i) and insert_at(x, i) operations in a binary tree-backed sequence?
The overall runtime is O(2h), which simplifies to O(h).
Get Item at Index Pseudo Code
procedure GET(T, i) {
    if T.left == null then {
        left_size <- 0
    } else {
        left_size <- T.left.size
    }
    if left_size == i then {
        return T
    } else if left_size > i then {
        return get(T.left, i)
    } else {
        return get(T.right, i - left_size - 1)
    }
}
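The same procedure as runnable Python, assuming each node carries the node.size field described above:

def get(T, i):
    # Node at index i (0-indexed) of the sequence stored in T's subtree.
    left_size = T.left.size if T.left else 0
    if i == left_size:
        return T
    if i < left_size:
        return get(T.left, i)
    return get(T.right, i - left_size - 1)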
What is the height of a balanced binary tree?
O(log n)
Name some types of balanced binary trees.
Red-black trees, splay trees, 2-3 trees, AVL trees.
Which type of tree is popular for database construction but is not binary?
B-trees.
What was the first proposed balanced binary tree and when was it introduced?
The AVL tree (Adelson-Velsky and Landis, 1962).
When is a tree considered height balanced?
When, at every node in the tree, the heights of its left and right children differ by at most 1.
How is the skew of a node calculated?
Skew of a node = node.right.height - node.left.height (where a null child has height -1).
What does it mean for a tree to have a height of O(log n)?
It means the tree's height grows logarithmically with the number of nodes, ensuring efficient operations like search, insert, and delete.
Why are balanced binary trees important in computer science?
They ensure efficient operations such as searching, inserting, and deleting elements by keeping the tree's height logarithmic.
What do rotations change in a binary tree?
Rotations change the structure of the tree without changing its traversal order.
How many pointer changes does it take to perform a tree rotation?
O(1) pointer changes.
What is the impact of rotations on the height of a tree?
A rotation changes the height of the affected subtree by -1, 0, or +1.
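A Python sketch of a right rotation on the hypothetical Node class from earlier (a left rotation is symmetric; updating any stored height/size fields is omitted):

def rotate_right(D):
    # D's left child B becomes the root of this subtree; the traversal
    # order is unchanged and only O(1) pointers are rewritten.
    B = D.left
    D.left = B.right           # B's old right subtree moves under D
    if D.left:
        D.left.parent = D
    B.parent = D.parent        # B takes D's place under D's old parent
    if B.parent:
        if B.parent.left is D:
            B.parent.left = B
        else:
            B.parent.right = B
    B.right = D
    D.parent = B
    return B                   # new root of the rotated subtree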
What does inserting or deleting a node do to the height of ancestor nodes?
It can change the heights of the ancestor nodes, so their stored heights must be updated (incremented or decremented) on the way up.
How much additional work does insertion or deletion add, due to rebalancing?
O(log n) work.
By how much does the skew of a parent node change when a child node is inserted or deleted?
The skew changes by at most 1.
What must be done if a parent's skew becomes +2 or -2 after insertion or deletion?
The subtree must be rebalanced.