CSI 3105 - Midterm II


29 Terms

1
New cards

Making Change

greedy algorithm; for a given amount of money, this algorithm finds the fewest Canadian coins that add up to that amount by taking as many of the largest coin as possible without going over, then the second-largest, and so on until the amount is matched. Could be O(number of coins in the set) if implemented similarly to the Indiscrete Knapsack.
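A minimal sketch of this greedy strategy, assuming standard Canadian denominations in cents (the coin values, function name, and example are illustrative, not from the card):

# Greedy change-making: take as many of each coin as possible, largest first.
# Coin values (in cents) are an assumption; any canonical coin system works.
CANADIAN_COINS = [200, 100, 25, 10, 5, 1]

def make_change(amount, coins=CANADIAN_COINS):
    counts = {}
    for coin in coins:                        # coins must be sorted largest to smallest
        counts[coin], amount = divmod(amount, coin)
    if amount != 0:
        raise ValueError("amount cannot be made with this coin set")
    return counts

print(make_change(287))   # -> one toonie, three quarters, one dime, two pennies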

2
New cards

Indiscrete Knapsack

greedy algorithm; for a given collection of items, each with a set value and weight, and a given maximum weight, this algorithm finds the portion (from 0 to 1) of each item to take in order to get the maximum possible value without exceeding the maximum weight. It does this by computing the ratio of value to weight for each item, then taking all of the item with the highest ratio, then the next, and the next, until it reaches an item that can only be partially brought; every item after that is left behind. O(n log n), which is entirely due to sorting the ratios.
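A sketch of this fractional-knapsack greedy in Python (the (value, weight) item format, function name, and example numbers are illustrative):

def fractional_knapsack(items, max_weight):
    # Sort item indices by value-to-weight ratio, best first; this O(n log n) sort dominates.
    order = sorted(range(len(items)), key=lambda i: items[i][0] / items[i][1], reverse=True)
    fractions = [0.0] * len(items)            # portion (0-1) taken of each item
    total_value, remaining = 0.0, max_weight
    for i in order:
        if remaining <= 0:
            break
        value, weight = items[i]
        take = min(1.0, remaining / weight)   # all of the item, or the fraction that still fits
        fractions[i] = take
        total_value += take * value
        remaining -= take * weight
    return total_value, fractions

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))   # -> (240.0, [1.0, 1.0, 0.666...])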

3
New cards

Kruskal’s algorithm

greedy algorithm; for a given undirected, connected, and weighted graph, this algorithm finds a minimum spanning tree by sorting all of the edges by weight, putting each vertex into its own set, and then combining sets that are connected by the smallest remaining edge, taking care that an edge whose endpoints are already in the same set is not added to the minimum spanning tree, so that no cycle is created. O(m log n)
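A sketch of Kruskal's algorithm in Python. For brevity it uses a standard union-find with union-by-size and path halving rather than the linked-list sets from the Union-Find card; the edge and vertex formats and the example graph are assumptions:

def kruskal(n, edges):
    # Vertices are 0..n-1; edges is a list of (weight, u, v) tuples.
    parent = list(range(n))
    size = [1] * n

    def find(x):                               # representative (name) of x's set
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):         # O(m log m) = O(m log n) sort
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                           # both endpoints already in one set: skip, to avoid a cycle
        if size[ru] < size[rv]:
            ru, rv = rv, ru
        parent[rv] = ru                        # union the two sets
        size[ru] += size[rv]
        mst.append((u, v, weight))
    return mst

print(kruskal(4, [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 2, 3), (5, 1, 3)]))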

4
New cards

Prim-Jarnik algorithm

greedy algorithm; for a given undirected, connected, and weighted graph, this algorithm finds a minimum spanning tree by choosing an arbitrary start vertex r and adding it to a set A that will contain every vertex added to the minimum spanning tree. For each vertex y, the algorithm maintains three values: minweight(y), the weight of the smallest edge connecting y to A (infinity if there isn't one); closest(y), the vertex on the other end of that edge (nil if there is none); and a boolean marking whether y is already in the minimum spanning tree. For each y where the edge (r, y) exists, these values are initialized accordingly. The algorithm then builds a min-heap Q of all vertices not yet in the minimum spanning tree, keyed by minweight(y), and repeatedly removes the top vertex from Q, adds it to A, records its edge in the set of edges to return, and updates the minweight() and closest() values of its neighbouring vertices. O(m log n)
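A sketch of Prim-Jarnik in Python. Since heapq has no decrease-key, this version pushes updated (minweight, vertex) entries and skips stale ones, which still gives O(m log n); the adjacency-list format and example graph are assumptions:

import heapq

def prim_jarnik(graph, start=0):
    # graph[u] is a list of (v, weight) pairs for an undirected, connected, weighted graph.
    n = len(graph)
    in_mst = [False] * n
    minweight = [float("inf")] * n     # weight of the lightest edge from each vertex to the tree
    closest = [None] * n               # the tree vertex on the other end of that edge
    minweight[start] = 0
    heap = [(0, start)]
    mst_edges = []
    while heap:
        w, u = heapq.heappop(heap)
        if in_mst[u]:
            continue                   # stale heap entry
        in_mst[u] = True
        if closest[u] is not None:
            mst_edges.append((closest[u], u, w))
        for v, weight in graph[u]:     # update neighbours still outside the tree
            if not in_mst[v] and weight < minweight[v]:
                minweight[v], closest[v] = weight, u
                heapq.heappush(heap, (weight, v))
    return mst_edges

g = [[(1, 1), (2, 4)], [(0, 1), (2, 3), (3, 5)], [(0, 4), (1, 3), (3, 2)], [(1, 5), (2, 2)]]
print(prim_jarnik(g))   # same total MST weight as the Kruskal example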

5
New cards

Longest Common Subsequence

dynamic algorithm; for two given sequences, this algorithm finds the length of their longest common subsequence (LCS) by filling a matrix A where entry A[i,j] is the length of the LCS of the first sequence up to its ith element and the second up to its jth, so A[n,m] contains the length of the full LCS. The algorithm starts by setting every A[i,j] where i or j = 0 to 0, then iterates through every j for each i. If the ith element and the jth element are the same, the entry is A[i-1, j-1] + 1, as that element is certainly in the subsequence; otherwise it is the maximum of A[i-1, j] and A[i, j-1], because that element may still be in the subsequence. O(mn)
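A sketch of the table-filling step (the function name and example strings are illustrative):

def lcs_table(x, y):
    # A[i][j] = length of the LCS of x[:i] and y[:j]; row 0 and column 0 stay 0.
    n, m = len(x), len(y)
    A = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                A[i][j] = A[i - 1][j - 1] + 1          # this element is certainly in the LCS
            else:
                A[i][j] = max(A[i - 1][j], A[i][j - 1])
    return A

print(lcs_table("ABCBDAB", "BDCABA")[-1][-1])   # -> 4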

6
New cards

How to find the Longest Common Subsequence from the matrix

Start by considering A[n,m]. If either A[n-1,m] or A[n,m-1] holds an equal value, move to that entry. If not, add the nth element of the first sequence to the result (the LCS is built back-to-front, so prepend or reverse at the end) and move to consider A[n-1, m-1]. Continue until the entry being considered is 0.
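A sketch of this traceback, assuming the lcs_table() function from the previous card's sketch:

def lcs_from_table(x, y, A):
    i, j = len(x), len(y)
    result = []
    while A[i][j] != 0:
        if A[i - 1][j] == A[i][j]:       # an equal neighbour: move up
            i -= 1
        elif A[i][j - 1] == A[i][j]:     # an equal neighbour: move left
            j -= 1
        else:                            # no equal neighbour: x[i-1] is in the LCS
            result.append(x[i - 1])
            i, j = i - 1, j - 1
    return result[::-1]                  # reverse, since the walk goes from the end to the start

x, y = "ABCBDAB", "BDCABA"
print("".join(lcs_from_table(x, y, lcs_table(x, y))))   # -> BCBA (one valid LCS)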

7
New cards

Matrix Chain Multiplication

dynamic algorithm; for a given list of 2D matrices x1, …, xn where xi has dimensions p(i-1) × p(i), this algorithm finds the cheapest way to multiply all the matrices together by finding the optimal points at which to split the list into two factors. It stores the costs and split points in a matrix M, where M[i,j] holds the cheapest cost of multiplying xi…xj together and the split point k that achieves it; M[1,n] stores the cheapest cost for the whole chain as well as the final split point k. The algorithm starts by filling every position M[i,j] where i = j, setting the cost and k to 0, since a single matrix needs no multiplication or split point. It then fills every position where j = i+1, then j = i+2, and so on, always stopping when j exceeds n. For every position where i ≠ j, the cost is the minimum over all i ≤ k < j of M[i,k] + M[k+1,j] + p(i-1)·p(k)·p(j), and the stored k is the split point that achieves that minimum. O(n³)
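A sketch of the cost and split tables in Python, using 1-indexed tables for readability (the names and example dimensions are illustrative):

def matrix_chain(p):
    # p[0..n] holds the dimensions, so matrix x_i is p[i-1] x p[i].
    n = len(p) - 1
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    split = [[0] * (n + 1) for _ in range(n + 1)]   # k = 0 when i == j (single matrix)
    for length in range(2, n + 1):                  # fill j = i+1, then j = i+2, ...
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = float("inf")
            for k in range(i, j):                   # try every split point i <= k < j
                c = cost[i][k] + cost[k + 1][j] + p[i - 1] * p[k] * p[j]
                if c < cost[i][j]:
                    cost[i][j], split[i][j] = c, k
    return cost, split

cost, split = matrix_chain([10, 30, 5, 60])   # matrices 10x30, 30x5, 5x60
print(cost[1][3], split[1][3])                # -> 4500 2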

8
New cards

How to find the split points for Matrix Chain Multiplication

For each entry M[i,j] with split point k, its two subproblems are M[i,k] and M[k+1,j]. Start with the k stored at M[1,n] as the first split point, then find the split points of the subproblems recursively until every range holds a single matrix (k = 0 for all of them).
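A sketch of recovering the parenthesization, assuming the matrix_chain() sketch from the previous card:

def parenthesize(split, i, j):
    if i == j:
        return f"x{i}"                     # single matrix: no split needed (k = 0)
    k = split[i][j]
    return f"({parenthesize(split, i, k)} {parenthesize(split, k + 1, j)})"

cost, split = matrix_chain([10, 30, 5, 60])
print(parenthesize(split, 1, 3))   # -> ((x1 x2) x3)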

9
New cards

Discrete Knapsack

dynamic algorithm; for a given bag of items o1, …, on, each oi with a value vi and a weight wi, and a given maximum weight W, this algorithm returns which items to pack to create a bag of maximum value within the maximum weight. Items are either packed entirely or not at all. The algorithm builds a matrix K, where K[i,j] stores the maximum value when only the first i items are considered and the maximum weight is j, so K[n,W] is the overall maximum value. To start, the algorithm sets every entry with no items (i = 0) or no weight (j = 0) to 0. Then, iterating through every weight j for each i, it checks whether the ith item is too heavy on its own to be packed (wi > j). If it is, the item is not brought at all and the entry equals the same bag without the item, K[i-1,j]. Otherwise, the entry is the maximum of K[i-1,j], where the ith item isn't brought, and K[i-1, j-wi] + vi, where the ith item is brought. O(nW)
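A sketch of the table-filling step (the parallel value/weight lists and the example numbers are illustrative):

def knapsack_table(values, weights, W):
    # K[i][j] = best value using only the first i items with capacity j.
    n = len(values)
    K = [[0] * (W + 1) for _ in range(n + 1)]       # row 0 / column 0: no items or no weight
    for i in range(1, n + 1):
        v, w = values[i - 1], weights[i - 1]
        for j in range(1, W + 1):
            if w > j:                               # item i is too heavy on its own
                K[i][j] = K[i - 1][j]
            else:                                   # best of leaving it or bringing it
                K[i][j] = max(K[i - 1][j], K[i - 1][j - w] + v)
    return K

print(knapsack_table([1, 4, 3], [2, 3, 4], 7)[-1][-1])   # -> 7 (items 2 and 3)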

10
New cards

How to find the items packed in the Discrete Knapsack

Start by considering K[n,W]. If K[i-1,j] holds the same value as the entry being considered, the ith item was not included; move to consider K[i-1, j]. Otherwise, the ith item is included; move to consider K[i-1, j-wi]. Continue until the entry being considered has a value of 0.
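A sketch of the item traceback, assuming knapsack_table() from the previous card's sketch:

def packed_items(values, weights, W, K):
    items = []
    i, j = len(values), W
    while K[i][j] != 0:
        if K[i - 1][j] == K[i][j]:       # same value without item i: it was not packed
            i -= 1
        else:                            # value changed: item i was packed
            items.append(i)
            j -= weights[i - 1]
            i -= 1
    return sorted(items)

values, weights, W = [1, 4, 3], [2, 3, 4], 7
K = knapsack_table(values, weights, W)
print(packed_items(values, weights, W, K))   # -> [2, 3]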

11
New cards

Floyd-Warshall/All Pairs Shortest Path

dynamic algorithm; for a given weighted and directed graph, this algorithm finds the length of the shortest path between every pair of vertices. The algorithm uses a 3D matrix F to store the lengths: the entry F[i,j,k] is the length of the shortest path between vertices i and j that is allowed to use intermediate vertices up to and including vertex k. The final shortest-path lengths are stored at F[i,j,n]. To start, the algorithm sets every entry at k = 0 where i = j to 0, as no travel is needed. Still at k = 0, every other entry is set to infinity unless a direct edge exists from i to j, in which case the entry is that edge's weight. Then, for each k ≤ n, F[i,j,k] is set to the minimum of F[i,j,k-1], where the new vertex k didn't improve the path, and F[i,k,k-1] + F[k,j,k-1], where the combined weight of the shortest paths from i to k and from k to j is less than the best path using only vertices up to k-1. O(n³)
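A sketch keeping the full 3D table so the next card's reconstruction can use it (the usual implementation overwrites a single 2D table in place; the edge-dict format and example graph are assumptions):

def floyd_warshall(n, edges):
    # Vertices are 1..n; edges maps (i, j) -> weight.
    INF = float("inf")
    # F[k][i][j] = length of the shortest i -> j path using intermediate vertices <= k.
    F = [[[INF] * (n + 1) for _ in range(n + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[0][i][i] = 0                              # no travel needed
    for (i, j), w in edges.items():
        F[0][i][j] = w                              # direct edges at level k = 0
    for k in range(1, n + 1):
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                F[k][i][j] = min(F[k - 1][i][j],                    # k did not improve the path
                                 F[k - 1][i][k] + F[k - 1][k][j])   # go through k
    return F

edges = {(1, 2): 3, (2, 3): 2, (1, 3): 10, (3, 1): 1}
F = floyd_warshall(3, edges)
print(F[3][1][3])   # -> 5 (through vertex 2, instead of the direct weight-10 edge)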

12
New cards

How to find the set of vertices in the shortest path found by Floyd-Warshall

For an entry F[i,j,k], first add i and j to the set of vertices on the path, then compare it with F[i,j,k-1]. If the value changes, add k to the set and recurse on the entries F[i,k,k-1] and F[k,j,k-1]. If the value doesn't change, add nothing and continue down to the k-2 level. Continue this process until k = 0.
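A sketch of this recovery scheme, assuming the 3D table F from the previous card's floyd_warshall() sketch:

def path_vertices(F, i, j, k):
    if i == j:
        return {i}
    vertices = {i, j}
    while k > 0:
        if F[k][i][j] != F[k - 1][i][j]:          # the value changed: k is on the path
            return (vertices
                    | path_vertices(F, i, k, k - 1)
                    | path_vertices(F, k, j, k - 1))
        k -= 1                                     # unchanged: drop to the next level down
    return vertices

edges = {(1, 2): 3, (2, 3): 2, (1, 3): 10, (3, 1): 1}
F = floyd_warshall(3, edges)
print(path_vertices(F, 1, 3, 3))   # -> {1, 2, 3}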

13
New cards

What is a greedy algorithm

An algorithm that makes each decision for a solution based on what looks best in the moment. To be correct, a locally optimal choice in the problem must also be globally optimal.

14
New cards

How many edges are in a tree of n vertices

n-1 edges

15
New cards

What is a spanning tree

A subgraph of a given graph that contains all the vertices while also being a tree

16
New cards

What is a minimum spanning tree

A spanning tree with the smallest combined weight of edges possible

17
New cards

Fundamental Lemma of Minimal Spanning Trees

For an undirected, connected, and weighted graph, when the vertices are split into two groups, the smallest edge connecting the two groups must be part of the graph's minimal spanning tree.

18
New cards

What is Union-Find Sequence

A sequence of n-1 union operations and m find operations, where union combines two sets and find(x) returns the name of the set x is stored in. When sets are stored as doubly linked lists, with the head as the set name, the next pointer pointing to the next node, and the back pointer pointing to the head, union takes O(p), where p is the length of the smaller set, and find(x) takes O(1). With this set-up, the Union-Find sequence takes O(m + n log n).
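A simplified sketch of this structure. It replaces the explicit doubly linked lists with Python lists keyed by set name but keeps the same idea: find(x) is O(1) because every element stores its set's name, and union relabels the smaller set in O(p):

class UnionFind:
    def __init__(self, elements):
        self.set_of = {x: x for x in elements}       # element -> set name (the "head")
        self.members = {x: [x] for x in elements}    # set name -> its members

    def find(self, x):                               # O(1): read the stored name
        return self.set_of[x]

    def union(self, a, b):
        a, b = self.find(a), self.find(b)
        if a == b:
            return
        if len(self.members[a]) < len(self.members[b]):
            a, b = b, a                              # always relabel the smaller set
        for x in self.members[b]:
            self.set_of[x] = a
        self.members[a] += self.members.pop(b)

uf = UnionFind(range(5))
uf.union(0, 1); uf.union(3, 4); uf.union(1, 4)
print(uf.find(3) == uf.find(0))   # -> True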

19
New cards

Why is log m O(log n) in an undirected graph

In these graphs, m ≤ n(n-1)/2, so m is O(n²). Therefore log m is O(log(n²)), and by the logarithmic rule log_c(a^b) = b·log_c(a), log m is O(log n).

20
New cards

Why is it important to remember to write deg(v) and not m where applicable

The handshaking lemma may allow you to write the runtime as O(m) instead of O(nm)

21
New cards

Handshaking Lemma

∑(v∈V) deg(v) = 2m

22
New cards

What is the fundamental difference in approaches between Kruskal and Prim-Jarnik’s MST algorithms

Kruskal is edge-focused, adding edges to the MST when they are the smallest available, while Prim-Jarnik is vertex-focused, adding edges to the MST when a vertex has the smallest weight between it and the MST set. They have the same runtime

23
New cards

What are Dynamic Algorithms

Recursive algorithms that divide a problem into subproblems; unlike divide & conquer, these subproblems are not necessarily of equal size. Only create a dynamic solution when the problem has optimal substructure.

24
New cards

What is optimal substructure

In dynamic programming, optimal substructure is when the subproblems of a problem do not interfere with one another, and combining the optimal solutions to the subproblems yields an optimal solution to the main problem.

25
New cards

How to design a dynamic algorithm

Consider how the solution might change if the input was a little bit smaller, either with one less element or split in the middle somewhere (and figure out where). Consider the different paths the algorithm might take, then the base cases, and build the recurrence bottom-up.

26
New cards

Why can't LCS be used to find palindromes

Palindromes are substrings, i.e. consecutive sequences of characters. LCS only finds subsequences, which need not be consecutive, so it recovers non-contiguous subsequences rather than contiguous substrings.

27
New cards

∑(i=1 to n) i

(n(n+1))/2

28
New cards

∑(i=1 to n) i²

(n(n+1)(2n+1))/6

29
New cards

what is the relation between edges and vertices if a graph is connected

n-1<=m