Greedy Scheduling (Activity Selection) | Problem
Given n events with start/end times, select the maximum number of non-overlapping events (max compatible set).
Greedy Scheduling (Activity Selection) | Algorithm
Sort events by increasing finish time; pick the earliest-finishing event; then repeatedly pick the next event whose start ≥ finish of the last picked event.
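A minimal Python sketch of the algorithm above (function name and the `(start, finish)` tuple representation are illustrative):

```python
def select_activities(events):
    """Greedy activity selection: events is a list of (start, finish) pairs.
    Returns a maximum-size set of pairwise non-overlapping events."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(events, key=lambda e: e[1]):  # sort by finish time
        if start >= last_finish:  # compatible with the last picked event
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```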
Greedy Scheduling (Activity Selection) | Proof idea (why optimal)
Exchange argument: suppose OPT has more events than greedy. Let p be the first position where they differ; greedy's pick i_p finishes no later than OPT's j_p, so swap i_p into OPT in place of j_p (still feasible, same count). Repeat swaps until OPT starts with greedy's whole schedule; then OPT can't have an extra event after greedy's last event, or greedy would have taken it → contradiction.
Greedy Scheduling (Activity Selection) | Time complexity
O(n log n) due to sorting; if already sorted by finish time, one pass O(n).
Interval Partitioning (Min Classrooms) | Problem
Given intervals (classes), schedule all with the minimum number of rooms.
Interval Partitioning (Min Classrooms) | Algorithm
Sort classes by start time; maintain a min-heap of rooms keyed by current end time; for each class (s,e): if the earliest-ending room ends ≤ s, reuse that room (update its end to e); else open a new room; answer = total rooms created, i.e., the final heap size.
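The heap-based sweep above can be sketched in Python (function name and tuple representation are illustrative):

```python
import heapq

def min_rooms(classes):
    """Interval partitioning: classes is a list of (start, end) pairs.
    Returns the minimum number of rooms needed to schedule all of them."""
    rooms = []  # min-heap of current end times, one entry per open room
    for start, end in sorted(classes):     # process classes by start time
        if rooms and rooms[0] <= start:    # earliest-ending room is free: reuse
            heapq.heapreplace(rooms, end)
        else:                              # every room overlaps: open a new one
            heapq.heappush(rooms, end)
    return len(rooms)
```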
Interval Partitioning (Min Classrooms) | Proof idea (why optimal)
Lower bound: any schedule needs at least "depth" = max number of overlapping intervals at one time. Greedy creates a new room exactly when all existing rooms overlap the new interval, meaning depth ≥ #rooms. Since greedy uses #rooms and any solution needs ≥ depth ≥ #rooms, greedy is optimal.
Interval Partitioning (Min Classrooms) | Time complexity
O(n log n): sort + each interval does heap ops (log n).
Huffman Coding | Problem
Given symbol frequencies f(·), build a prefix-free binary code minimizing total cost Σ f(a)·codeLen(a) (equivalently expected bits).
Huffman Coding | Algorithm
Compute frequencies; put symbols in a min-heap by freq; while ≥2 items: extract two min x,y; merge into z with f(z)=f(x)+f(y); insert z back; finally assign 0/1 on left/right edges; codeword = root-to-leaf bits.
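The merge loop above can be sketched in Python; the tiebreak counter in the heap entries is an implementation detail to avoid comparing trees, and the function name is illustrative:

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code from a dict {symbol: frequency}.
    Returns {symbol: bitstring}. Assumes at least two symbols."""
    # Heap entries: (frequency, tiebreak, tree). A tree is either a leaf
    # symbol or a (left, right) pair of subtrees.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        fx, _, x = heapq.heappop(heap)       # two least-frequent trees
        fy, _, y = heapq.heappop(heap)
        heapq.heappush(heap, (fx + fy, count, (x, y)))  # merge into z
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):          # internal node: 0 left, 1 right
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix             # codeword = root-to-leaf bits
    walk(heap[0][2], "")
    return codes
```

On the classic frequency set {a:45, b:13, c:12, d:16, e:9, f:5}, any Huffman code has total cost 224.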
Huffman Coding | Proof idea (why optimal)
(1) Sibling lemma: two least-frequent symbols can be made sibling leaves at maximum depth in some optimal tree (swap argument). (2) Merge them into one symbol z (freq f(x)+f(y)) reducing alphabet size by 1; by induction Huffman is optimal on the reduced alphabet; expand z back into x,y. Cost relation: Cost(Σ)=Cost(Σ′)+f(x)+f(y), and OPT has the same form, so Huffman matches OPT.
Huffman Coding | Time complexity
Let m=|Σ| distinct symbols: building/using heap gives O(m log m) merges; plus counting frequencies O(N) over message length N.
MST (Minimum Spanning Tree) | Problem
Given weighted undirected graph, find a spanning tree with minimum total edge weight.
MST | Key definitions
Cut (S, V−S) partitions vertices; an edge crosses the cut if endpoints are on opposite sides; a cut respects A if no edge in A crosses it.
MST | Safe-edge theorem (statement)
If A is contained in some MST and a cut respects A, then the minimum-weight edge crossing that cut is safe to add (A ∪ {e} is still contained in some MST).
MST | Safe-edge theorem (proof idea)
Let T be an MST containing A. If min crossing edge e is already in T, done. Else add e to T → forms a cycle; that cycle must include another edge f crossing the same cut (and f ∉ A since cut respects A). Replace: T′=T∪{e}−{f}. Since w(e)≤w(f), T′ is no heavier → still an MST and now contains A∪{e}.
Prim's Algorithm | Problem
Compute an MST by growing one tree from a start vertex.
Prim's Algorithm | Algorithm
Start at root r; maintain key[v]=cheapest edge weight connecting v to the current tree and prev[v]=parent; put all vertices in a min-PQ by key (key[r]=0, others ∞); while PQ not empty: u=Extract-Min; for each neighbor v still in PQ: if w(u,v) < key[v], set key[v]=w(u,v), prev[v]=u, and Decrease-Key(v). MST edges are {(prev[v],v) : v ≠ r}.
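A Python sketch of Prim's algorithm; it uses a lazy-deletion heap (pushing duplicate entries and skipping finalized vertices) in place of Decrease-Key, and the adjacency-list representation is an assumption:

```python
import heapq

def prim_mst(adj, root=0):
    """Prim's algorithm on an adjacency list {u: [(v, w), ...]} for an
    undirected connected graph. Returns the total MST weight."""
    in_tree = set()
    total = 0
    pq = [(0, root)]  # (key, vertex); duplicates allowed (lazy deletion)
    while pq:
        key, u = heapq.heappop(pq)
        if u in in_tree:
            continue                   # stale entry: u already finalized
        in_tree.add(u)
        total += key                   # lightest edge crossing the cut
        for v, w in adj[u]:
            if v not in in_tree:
                heapq.heappush(pq, (w, v))
    return total
```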
Prim's Algorithm | Proof idea (why works)
At each step, tree vertices are S; the cut is (S, V−S), which respects current tree edges. Prim chooses the lightest edge crossing that cut (via minimum key), which is safe by the safe-edge theorem → repeating yields an MST.
Prim's Algorithm | Time complexity
With binary heap PQ: O(|E| log |V|).
Kruskal's Algorithm | Problem
Compute an MST by adding edges from lightest to heaviest without forming cycles.
Kruskal's Algorithm | Algorithm
Sort edges by nondecreasing weight; initialize union-find with Make-Set(v) for all v; scan edges (u,v) in sorted order: if Find(u)≠Find(v), add edge and Union(u,v); stop after |V|−1 edges.
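The sorted scan with union-find can be sketched in Python (edge representation and function name are illustrative; union-find uses path halving and union by rank):

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm: n vertices 0..n-1, edges as (w, u, v) triples.
    Returns the total MST weight."""
    parent = list(range(n))
    rank = [0] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, taken = 0, 0
    for w, u, v in sorted(edges):          # lightest edge first
        ru, rv = find(u), find(v)
        if ru != rv:                       # endpoints in different components
            if rank[ru] < rank[rv]:
                ru, rv = rv, ru
            parent[rv] = ru                # union by rank
            rank[ru] += rank[ru] == rank[rv]
            total += w
            taken += 1
            if taken == n - 1:             # spanning tree complete
                break
    return total
```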
Kruskal's Algorithm | Proof idea (why works)
At any step, chosen edges A form a forest (components). Consider a cut that separates two components you're about to connect; that cut respects A. Kruskal's next chosen edge is the minimum edge connecting two different components, i.e., the lightest edge crossing some respecting cut → safe-edge theorem → can be in an MST; repeating yields an MST.
Kruskal's Algorithm | Time complexity
O(|E| log |E|) for sorting + near-linear union-find overhead (often written O(|E| log |V|)).
Shortest Paths | Edge relaxation (definition)
For edge (X,Y) with weight wt(X,Y): if sp(Y) > sp(X)+wt(X,Y), then set sp(Y)=sp(X)+wt(X,Y) (and update prev[Y]).
Bellman-Ford | Problem
Single-source shortest paths with possible negative edges; detects reachable negative cycles.
Bellman-Ford | Algorithm
Init sp[s]=0, others=∞; repeat |V|−1 times: for each edge (u,v), relax(u,v); then do one more full pass—if any sp decreases, a negative cycle is reachable; else sp are shortest distances.
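The passes-of-relaxation structure can be sketched in Python (edge-list representation is an assumption):

```python
def bellman_ford(n, edges, s):
    """Bellman-Ford: n vertices 0..n-1, edges as (u, v, w) triples.
    Returns (dist, has_negative_cycle); dist[v] is the shortest distance
    from s, or inf if unreachable."""
    INF = float("inf")
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):                 # |V|-1 full relaxation passes
        for u, v, w in edges:
            if dist[u] + w < dist[v]:      # relax(u, v)
                dist[v] = dist[u] + w
    # One extra pass: any improvement means a reachable negative cycle.
    neg = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, neg
```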
Bellman-Ford | Proof idea (why works)
Induction on #edges: after i full passes, sp[v] equals the length of a shortest s→v path using at most i edges. Base i=0: sp[s]=0, all others ∞. Step: relaxing every edge extends each best ≤i-edge path by one more edge. Since simple paths have ≤ |V|−1 edges, after |V|−1 passes all shortest distances are final.
Bellman-Ford | Time complexity
O(|V|·|E|).
Dijkstra | Problem
Single-source shortest paths when all edge weights are non-negative.
Dijkstra | Algorithm
Init sp[s]=0, others=∞; put all vertices in min-PQ by sp; SET empty; while PQ not empty: u=Delete-Min; add u to SET (u finalized); for each neighbor v not finalized: relax(u,v) and Decrease-Key(v) in PQ.
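A Python sketch of the algorithm above; it substitutes a lazy-deletion heap (push duplicates, skip finalized vertices) for Decrease-Key, and the adjacency-list representation is an assumption:

```python
import heapq

def dijkstra(adj, s):
    """Dijkstra on an adjacency list {u: [(v, w), ...]} with non-negative
    edge weights. Returns a dict of shortest distances from s."""
    dist = {s: 0}
    done = set()                    # SET of finalized vertices
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)    # Delete-Min
        if u in done:
            continue                # stale entry: u already finalized
        done.add(u)
        for v, w in adj.get(u, []):
            nd = d + w
            if v not in done and nd < dist.get(v, float("inf")):
                dist[v] = nd        # relax(u, v)
                heapq.heappush(pq, (nd, v))
    return dist
```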
Dijkstra | Proof idea (why works)
Contradiction: let u be first extracted with wrong distance. On a true shortest path to u, let x be first vertex not yet extracted, with predecessor y already extracted. Relaxing (y,x) ensures sp[x] ≤ dist(s,x) ≤ dist(s,u) < sp[u], so sp[x] < sp[u] while x still in PQ—impossible since u was Delete-Min.
Dijkstra | Time complexity
O((|V|+|E|) log |V|) with a heap priority queue.
Floyd-Warshall | Problem
All-pairs shortest paths (every i→j) via dynamic programming.
Floyd-Warshall | Algorithm
Number vertices 1..n; start with matrix E^(0)=adjacency (0 on diagonal, edge weights, ∞ if no edge). For k=1..n: for all x,y: E^(k)[x,y]=min(E^(k−1)[x,y], E^(k−1)[x,k]+E^(k−1)[k,y]). Return E^(n).
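The triple loop above can be sketched in Python, updating the matrix in place layer by layer (0-indexed vertices; the matrix representation is an assumption):

```python
def floyd_warshall(dist):
    """Floyd-Warshall all-pairs shortest paths. dist is an n×n matrix:
    0 on the diagonal, edge weight for an edge, float('inf') otherwise.
    Returns a new matrix of shortest-path distances."""
    n = len(dist)
    E = [row[:] for row in dist]      # E^(0): copy of the input matrix
    for k in range(n):                # allow vertex k as an intermediate
        for x in range(n):
            for y in range(n):
                if E[x][k] + E[k][y] < E[x][y]:   # path through k is shorter
                    E[x][y] = E[x][k] + E[k][y]
    return E
```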
Floyd-Warshall | Proof idea (why works)
DP/induction on k: after iteration k, E^(k)[x,y] is the shortest path from x to y whose intermediate vertices are only from {1..k}. Either best path doesn't use k (keep old) or uses k (split through k).
Floyd-Warshall | Time complexity
O(|V|^3) time and O(|V|^2) space.