what is a rotation like?
rotate towards the lighter side when working on the heavier side: the lower node hands a subtree to the higher node, and the lower node moves up
a self-balancing binary search tree
when the tree becomes unbalanced, we rotate it
a valid AVL tree requires that, for each node, the heights of its left and right subtrees differ by at most 1
an empty tree has a height of -1
for each node, its height = max of its left and right subtrees' heights + 1
AVL tree
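A minimal sketch of the height and balance rules above (the `Node` class and names are just illustrative, not from the deck):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def height(node):
    # an empty tree has height -1
    if node is None:
        return -1
    # height = max of the left and right subtrees' heights + 1
    return 1 + max(height(node.left), height(node.right))

def is_avl(node):
    # every node's subtree heights may differ by at most 1
    if node is None:
        return True
    balance = abs(height(node.left) - height(node.right))
    return balance <= 1 and is_avl(node.left) and is_avl(node.right)
```

A perfectly balanced 3-node tree passes the check; a 3-node right-leaning chain fails it.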
whats the best case for the height of an AVL tree
O(logn)
whats the worst case for the height of an AVL tree
O(logn), since AVL balancing keeps the height logarithmic even in the worst case
the runtime of a rotation in an AVL tree (since you're only updating a few pointers) is
O(1)
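A sketch of a right rotation showing why it's O(1): only a fixed number of pointers change, no matter how big the tree is (the `Node` class is illustrative; a left rotation is the mirror image):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    # rotate toward the lighter (right) side: y's heavy left child x moves up
    x = y.left
    y.left = x.right   # the lower node x hands its right subtree to y
    x.right = y        # y drops below x; x becomes the new subtree root
    return x           # a constant number of pointer updates: O(1)
```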
(T/F) The lower bound for comparison-based sorting is Ω(nlogn)
True
Expected search time for a hash table is
O(1)
a way to handle collisions in a table (two different keys are assigned the same index)
“stack” the elements in the same index of the hash table
essentially an array of linked lists
separate chaining
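A minimal separate-chaining sketch, assuming Python's built-in `hash` and a fixed table size (no resizing; class and method names are illustrative):

```python
class ChainedHashTable:
    # each slot holds a list ("chain") of (key, value) pairs
    def __init__(self, size=8):
        self.table = [[] for _ in range(size)]

    def _index(self, key):
        # hash then mod by the table size to pick a slot
        return hash(key) % len(self.table)

    def insert(self, key, value):
        chain = self.table[self._index(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:                # key already present: overwrite
                chain[i] = (key, value)
                return
        chain.append((key, value))      # a collision just stacks onto the chain

    def find(self, key):
        for k, v in self.table[self._index(key)]:
            if k == key:
                return v
        return None
```

With a table of size 2 and three keys, at least one chain must hold a collision, yet lookups still work.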
a way to handle collisions in a table (two different keys are assigned the same index)
find a different location to insert
open addressing
want to keep the load factor (number of elements / table size) low so that
there are few collisions
what is the expected insert/find runtime for separate chaining
O(1) bc hash/mod division is constant and don’t expect many collisions
check the hashed spot; if it's taken, linearly search from that spot until an empty spot or the key is found
linear probing
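A minimal linear-probing sketch (illustrative names; it assumes the load factor stays low, since there's no resizing and a full table would make a failed probe loop forever):

```python
class LinearProbingTable:
    EMPTY = object()    # sentinel marking an unused slot

    def __init__(self, size=8):
        self.keys = [self.EMPTY] * size
        self.values = [None] * size

    def insert(self, key, value):
        i = hash(key) % len(self.keys)
        # probe forward from the hashed spot for this key or an empty slot
        while self.keys[i] is not self.EMPTY and self.keys[i] != key:
            i = (i + 1) % len(self.keys)
        self.keys[i], self.values[i] = key, value

    def find(self, key):
        i = hash(key) % len(self.keys)
        # stop at the first empty slot: the key can't be past it
        while self.keys[i] is not self.EMPTY:
            if self.keys[i] == key:
                return self.values[i]
            i = (i + 1) % len(self.keys)
        return None
```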
what is the expected insert/find runtime for linear probing and why?
O(1) bc hash/mod division is constant and don’t expect many collisions
for AVL trees vs Hashing, what are the positives and negatives of each?
AVL trees: + uses exactly n nodes for n items (no wasted space), - O(logn) lookup
Hashing: + O(1) expected lookup, - table size is bigger than the # of items
for open addressing vs chaining, what are the positives and negatives of each?
open addressing: + choices to handle collisions, - more difficult to implement
separate chaining: + easy to handle collisions/maintain, - not many options to handle collisions
maintains a large table while only using a few of its slots
has a hash function that takes a key and returns some integer
insert/find an entry
hashing
a 1D array of key, priority value pairs
organize the elements by priority value
implement as a binary min heap (each node <= children)
1D array emulates a binary tree
left child = 2 * i
right child = 2 * i + 1
parent is i // 2
priority queue
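The 1-indexed array layout above can be written out directly (the example heap values are just illustrative):

```python
# 1-indexed binary heap layout: index 0 is unused so the formulas stay clean
def left(i):   return 2 * i
def right(i):  return 2 * i + 1
def parent(i): return i // 2

# example binary min heap stored from index 1: each node <= its children
heap = [None, 1, 3, 2, 7, 4, 6]
for i in range(2, len(heap)):
    assert heap[parent(i)] <= heap[i]
```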
how does push_back work in priority queue
push the element onto the back of the heap array and bubble it up if needed
how does pop_front work in priority queue
replace first node with last node, pop the last node, bubble down if needed
what is the run time for push_back in priority queue
O(logn) bc in worst case bubble up all the way to root of a balanced binary tree
what are two ways to update an existing element’s priority in a priority queue?
linearly searching O(n) the array for the element and then bubbling up if needed O(logn) => O(n)
key to index unordered map, O(1) lookup and then bubbling if needed O(logn) => O(logn)
what is the run time for pop_front() in priority queue
O(logn) bc in worst case bubble down all the way to leaf of a balanced binary tree
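A sketch tying the push_back/pop_front cards together: a 1-indexed binary min heap with bubble-up and bubble-down, each bounded by the tree height, hence O(logn) (class and method names follow the deck's wording but are otherwise illustrative):

```python
class MinPQ:
    # 1-indexed binary min heap; index 0 is a placeholder
    def __init__(self):
        self.heap = [None]

    def push_back(self, x):
        # append at the end, then bubble up while smaller than the parent
        self.heap.append(x)
        i = len(self.heap) - 1
        while i > 1 and self.heap[i] < self.heap[i // 2]:
            self.heap[i], self.heap[i // 2] = self.heap[i // 2], self.heap[i]
            i //= 2                              # climbs at most the tree height

    def pop_front(self):
        top = self.heap[1]
        self.heap[1] = self.heap[-1]             # replace first node with last
        self.heap.pop()                          # pop the last node
        i, n = 1, len(self.heap) - 1
        while True:                              # bubble down if needed
            smallest = i
            for c in (2 * i, 2 * i + 1):         # children at 2i and 2i + 1
                if c <= n and self.heap[c] < self.heap[smallest]:
                    smallest = c
            if smallest == i:
                break
            self.heap[i], self.heap[smallest] = self.heap[smallest], self.heap[i]
            i = smallest
        return top
```

Pushing a few values and popping them all back yields sorted order (this is heapsort in disguise).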
like a network
has vertices => nodes
has edges => a pair of vertices
in directed graph, order of pairs matter
need:
current node id
adjacency list/matrix (know our connections)
matrix: rows are src node, cols are destination node (T/F in each field)
list: each index represents a list of destinations for that specific node
if graph is dense, matrix is better
visited boolean array/map
predecessor array/map (optional, if you want to output the path)
graph
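The matrix and list encodings above, sketched for a tiny 3-node directed graph (the edges are made up for illustration):

```python
# directed graph on nodes 0, 1, 2 with edges 0->1, 0->2, 1->2

# adjacency matrix: row = src node, col = destination node, True if edge exists
matrix = [
    [False, True,  True ],   # 0 -> 1, 0 -> 2
    [False, False, True ],   # 1 -> 2
    [False, False, False],   # 2 has no outgoing edges
]

# adjacency list: index i holds the list of destinations for node i
adj_list = [[1, 2], [2], []]
```

The matrix costs |V|² booleans regardless of edge count, which is why it only pays off for dense graphs; the list costs space proportional to |E|.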
for a sparse graph, |E| = …
O(|V|), a linear number of edges
for a dense graph, |E| = …
O(|V|²), the maximum number of edges
Basic graph traversal
recursive
traverse the graph as deep as possible, when a dead end is reached, we back up to the previous vertex and try a different path
does not necessarily give the shortest path
DFS
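A minimal recursive DFS sketch over an adjacency list (a dict of lists here; the recursion's backtracking is the "back up to the previous vertex" step):

```python
def dfs(adj, node, visited=None):
    # visited: the boolean set shared across recursive calls
    if visited is None:
        visited = set()
    visited.add(node)
    for nxt in adj[node]:
        if nxt not in visited:
            dfs(adj, nxt, visited)   # go as deep as possible first
    # returning from here is the "back up and try a different path" step
    return visited
```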
whats the runtime for DFS
O(|V| + |E|), since each vertex and each edge is processed once (often written O(|E|) for connected graphs)
simple graph traversal
always gives the shortest path in an unweighted graph
not recursive
use a FIFO queue to maintain the vertices
BFS
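A minimal BFS sketch using a FIFO queue plus the optional predecessor map from the graph card to reconstruct the path (function name is illustrative):

```python
from collections import deque

def bfs_path(adj, start, goal):
    queue = deque([start])    # FIFO queue of vertices to visit
    pred = {start: None}      # predecessor map; also doubles as "visited"
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:     # walk predecessors back to the start
                path.append(node)
                node = pred[node]
            return path[::-1]
        for nxt in adj[node]:
            if nxt not in pred:
                pred[nxt] = node
                queue.append(nxt)
    return None               # goal unreachable from start
```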
whats the runtime for BFS
O(|V| + |E|), since each vertex and each edge is processed once (often written O(|E|) for connected graphs)
greedy algorithm
finds the path with minimum weight from a start node to all nodes in a weighted graph
similar to BFS except a priority queue is used to maintain the nodes not a FIFO
shortest paths are undefined if a negative cycle is present
does not work with negative edge weights
dijkstra’s algorithm
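A minimal Dijkstra sketch using Python's `heapq` as the binary min heap PQ; this is the common lazy-deletion variant, where stale heap entries are skipped instead of updated in place (names are illustrative; weights must be non-negative):

```python
import heapq

def dijkstra(adj, start):
    # adj: {node: [(neighbor, weight), ...]} with non-negative weights
    dist = {start: 0}
    pq = [(0, start)]                 # binary min heap keyed on distance
    while pq:
        d, node = heapq.heappop(pq)   # greedily take the closest frontier node
        if d > dist.get(node, float("inf")):
            continue                  # stale entry; a shorter path was found
        for nxt, w in adj[node]:
            if d + w < dist.get(nxt, float("inf")):
                dist[nxt] = d + w     # relax the edge
                heapq.heappush(pq, (d + w, nxt))
    return dist                       # min weight from start to every node
```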
what are the different ways to do Dijkstra’s algorithm?
unsorted array/map as the PQ, binary min heap as the PQ, fibonacci heap as the PQ
whats the runtime for using an unsorted array/map as a PQ for Dijkstra’s algorithm?
O(|V|²)
whats the runtime for using a binary min heap as a PQ for Dijkstra’s algorithm in general?
O(|E|log|V|)
whats the runtime for using a binary min heap as a PQ for Dijkstra’s algorithm with a sparse graph?
O(|V|log|V|)
whats the runtime for using a binary min heap as a PQ for Dijkstra’s algorithm with a dense graph?
O(|V|²log|V|)
whats the runtime for using a fibonacci heap as a PQ for Dijkstra’s algorithm in general?
O(|E| + |V|log|V|)
whats the runtime for using a fibonacci heap as a PQ for Dijkstra’s algorithm in sparse graphs?
O(|V|log|V|)
whats the runtime for using a fibonacci heap as a PQ for Dijkstra’s algorithm in dense graphs?
O(|V|²)
greedy algorithm
is a minimum spanning tree (MST) algorithm
spanning tree = the minimum # of edges (|V| - 1) such that all the nodes are connected
uses a priority queue and a union-find data structure
maintains a leader (predecessor) array to determine each node’s family
does not guarantee a minimum path weight between two nodes
insert each edge into the PQ
while the PQ is not empty, pop the minimum-weight edge off; if its endpoints are not in the same family, marry (union) them and add the edge to the minimum spanning tree
Kruskal’s algorithm
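A minimal Kruskal sketch: sorting the edge list stands in for the PQ, and a leader array with path-halving is one simple union-find (names are illustrative):

```python
def kruskal(num_nodes, edges):
    # edges: list of (weight, u, v); sorting plays the role of the PQ
    parent = list(range(num_nodes))   # union-find leader array

    def find(x):
        # follow leaders up to the root, halving the path as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for w, u, v in sorted(edges):     # cheapest edges first: greedy
        ru, rv = find(u), find(v)
        if ru != rv:                  # different families: "marry" them
            parent[ru] = rv
            mst.append((w, u, v))
    return mst
```

On a 4-node graph the MST keeps exactly 3 edges and skips any edge whose endpoints are already connected.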
what is the runtime for Kruskal’s algorithm?
O(|E|log|E|), dominated by sorting/PQ-ing the edges (equivalent to O(|E|log|V|))