in place
only uses O(1) extra memory; the sorted array is given back in the input array, with no extra auxiliary arrays
stable
if a appears before b in the initial array and a and b are compared as equal, then a appears before b in the final array
goals for sorting
in place, stable, fast
Insertion Sort: best case
O(n)
Insertion Sort: worst case
O(n^2)
Selection Sort: best case/worst case
O(n^2)
Insertion Sort
maintains a sorted subarray at the front, inserting the next element of the unsorted part into its place until the whole array is sorted; in place and stable
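A minimal in-place sketch of the insertion step described above (class name and array values are illustrative):

```java
public class InsertionSortDemo {
    // Shift larger elements right, then drop the saved element into its slot.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int cur = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > cur) {
                a[j + 1] = a[j];  // shift right; equal elements never move past each other, so the sort is stable
                j--;
            }
            a[j + 1] = cur;
        }
    }

    public static void main(String[] args) {
        int[] a = {5, 2, 4, 1, 3};
        insertionSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
```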
Selection Sort
maintains a sorted subarray at the front, inserting the next smallest element of the unsorted part until the whole array is sorted; in place but not stable (the long-distance swaps can reorder equal elements)
Heap Sort
selection sort where the unsorted part is a heap; in place but not stable; the heap lets the next element be selected in O(log(n)) instead of O(n) time
O(n log(n))
Merge Sort
splits the array in half recursively, sorting and then merging the halves;
T(n) = 2T(n/2) + c1*n if n > 1, c2 otherwise; O(n log(n)); stable if merged correctly, but not in place
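A sketch of the recursive split-and-merge above; taking from the left half on ties (`<=`) is what makes the merge stable, and the temporary array is the reason it is not in place (class name and test data are illustrative):

```java
public class MergeSortDemo {
    // Sorts the half-open range a[lo..hi).
    static void mergeSort(int[] a, int lo, int hi) {
        if (hi - lo <= 1) return;          // base case: 0 or 1 elements
        int mid = (lo + hi) / 2;
        mergeSort(a, lo, mid);
        mergeSort(a, mid, hi);
        merge(a, lo, mid, hi);
    }

    // Stable merge: on ties, take from the left half first (<=).
    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];      // the O(n) auxiliary space: not in place
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 3};
        mergeSort(a, 0, a.length);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
```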
Quick Sort
divides the array based on each element's relation to a pivot, with the pivot chosen by median-of-three; in place but not stable; uses swapping to move elements smaller/larger than the pivot to the correct side of the subarray
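One way to sketch the in-place median-of-three partition described above (this uses a Lomuto-style partition; the class name and test data are illustrative):

```java
public class QuickSortDemo {
    static void quickSort(int[] a, int lo, int hi) {  // sorts a[lo..hi] inclusive
        if (lo >= hi) return;
        int p = partition(a, lo, hi);
        quickSort(a, lo, p - 1);
        quickSort(a, p + 1, hi);
    }

    static int partition(int[] a, int lo, int hi) {
        int mid = lo + (hi - lo) / 2;
        // Median-of-three: order a[lo], a[mid], a[hi] so the median sits at mid.
        if (a[mid] < a[lo]) swap(a, lo, mid);
        if (a[hi] < a[lo]) swap(a, lo, hi);
        if (a[hi] < a[mid]) swap(a, mid, hi);
        swap(a, mid, hi);                 // park the median pivot at the end
        int pivot = a[hi], i = lo;
        for (int j = lo; j < hi; j++)
            if (a[j] < pivot) swap(a, i++, j);  // swap smaller elements to the front
        swap(a, i, hi);                   // put the pivot between the two sides
        return i;
    }

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    public static void main(String[] args) {
        int[] a = {9, 3, 7, 1, 5};
        quickSort(a, 0, a.length - 1);
        System.out.println(java.util.Arrays.toString(a)); // [1, 3, 5, 7, 9]
    }
}
```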
Quick Sort: completely sequential best case
O(n log(n)) (pivot is the median)
Quick Sort: completely sequential worst case
O(n^2) (pivot is smallest or largest element)
Quick Sort: sequential partitioning, parallel recursive sorting span
O(n)
Quick Sort: completely parallel span
O(log^2(n))
Merge Sort: completely parallel span
O(log^3(n))
Bucket Sort
sorts elements into buckets indexed by value; O(m+n) where m is the number of buckets and n is the number of elements; beats the comparison lower bound by placing each element into its bucket in O(1) time instead of comparing elements to each other
Radix sort
sorts by one digit at a time, least significant first, using a stable bucket sort per digit; useful for ints and strings that aren't too long; running time O((n+r)d) where d is the number of digits and r is the radix
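An LSD radix sort sketch for non-negative ints with radix r = 10, where each pass is a stable counting sort on one decimal digit (class name and test data are illustrative):

```java
public class RadixSortDemo {
    static void radixSort(int[] a) {
        int max = 0;
        for (int x : a) max = Math.max(max, x);
        // One stable pass per decimal digit, least significant first.
        for (int exp = 1; max / exp > 0; exp *= 10) countingPass(a, exp);
    }

    // Stable counting sort on digit (a[i] / exp) % 10; stability preserves
    // the order established by earlier, less significant passes.
    static void countingPass(int[] a, int exp) {
        int[] out = new int[a.length];
        int[] count = new int[10];
        for (int x : a) count[x / exp % 10]++;
        for (int d = 1; d < 10; d++) count[d] += count[d - 1];  // prefix sums = end positions
        for (int i = a.length - 1; i >= 0; i--)                 // walk backwards for stability
            out[--count[a[i] / exp % 10]] = a[i];
        System.arraycopy(out, 0, a, 0, a.length);
    }

    public static void main(String[] args) {
        int[] a = {170, 45, 75, 90, 802, 24, 2, 66};
        radixSort(a);
        System.out.println(java.util.Arrays.toString(a)); // [2, 24, 45, 66, 75, 90, 170, 802]
    }
}
```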
lower bound for comparison based sorting
Omega(n log(n)): any comparison sort must make at least log_2(n!) = Omega(n log(n)) comparisons
Amdahl's Law
T1/TP <= 1 / (S + (1-S)/P)
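A tiny worked example of the bound above (the serial fraction 0.10 is an assumed illustration): with S = 10% serial work, even unboundedly many processors cannot beat a 1/S = 10x speedup.

```java
public class AmdahlDemo {
    // Upper bound on speedup: 1 / (S + (1 - S) / P).
    static double speedupBound(double s, int p) {
        return 1.0 / (s + (1 - s) / p);
    }

    public static void main(String[] args) {
        System.out.println(speedupBound(0.10, 4));    // ~3.08 on 4 processors
        System.out.println(speedupBound(0.10, 1000)); // ~9.9, approaching the 1/S = 10 ceiling
    }
}
```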
Speedup
T1 / TP
Parallelism
T1 / Tinfinity; by Amdahl's Law, at most 1 / S
work
running time without parallelism
span
longest path through the computation graph (DAG); the running time with infinitely many processors
Reduce
reduces an array to a single element or answer, usually uses RecursiveTask
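A reduce sketch in the style of the `fjPool.invoke(new LengthSumTask...)` card below: a hypothetical `SumTask` (a `RecursiveTask`) that sums an int array; the cutoff value is an assumption.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    static final int CUTOFF = 1000;  // below this size, just loop sequentially
    final int[] arr; final int lo, hi;

    SumTask(int[] arr, int lo, int hi) { this.arr = arr; this.lo = lo; this.hi = hi; }

    protected Long compute() {
        if (hi - lo <= CUTOFF) {          // sequential base case
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += arr[i];
            return sum;
        }
        int mid = (lo + hi) / 2;
        SumTask left = new SumTask(arr, lo, mid);
        SumTask right = new SumTask(arr, mid, hi);
        left.fork();                      // run the left half in another thread
        long r = right.compute();         // do the right half in this thread
        return left.join() + r;           // combine the two partial answers
    }

    public static void main(String[] args) {
        int[] a = new int[10_000];
        for (int i = 0; i < a.length; i++) a[i] = 1;
        long total = new ForkJoinPool().invoke(new SumTask(a, 0, a.length));
        System.out.println(total); // 10000
    }
}
```

Forking the left task and computing the right in the current thread avoids wasting the calling thread while the fork runs.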
Map
performs an operation on each element of input, producing an array of outputs, usually uses RecursiveAction
Prefix
creates an output array in which each element combines the corresponding input element with all elements before it; work O(n), span O(log(n))
Pack/Filter
packs original input array into a smaller output array based on a given condition (map, prefix, map), work O(n) span O(log(n))
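A sequential sketch of the pack's three phases; each phase (two maps and a prefix sum) is what gets parallelized in the real algorithm. The keep condition `> 10`, class name, and data are illustrative assumptions.

```java
public class PackDemo {
    static int[] pack(int[] in) {
        // Phase 1 (map): flag each element that passes the condition.
        int[] bits = new int[in.length];
        for (int i = 0; i < in.length; i++) bits[i] = in[i] > 10 ? 1 : 0;

        // Phase 2 (prefix): prefix-sum the flags; pos[i] is how many kept
        // elements appear at or before index i.
        int[] pos = new int[in.length];
        int sum = 0;
        for (int i = 0; i < in.length; i++) { sum += bits[i]; pos[i] = sum; }

        // Phase 3 (map): write each kept element to its output slot pos[i]-1.
        int[] out = new int[sum];
        for (int i = 0; i < in.length; i++)
            if (bits[i] == 1) out[pos[i] - 1] = in[i];
        return out;
    }

    public static void main(String[] args) {
        int[] in = {17, 4, 6, 8, 11, 5, 13, 19, 0, 24};
        System.out.println(java.util.Arrays.toString(pack(in))); // [17, 11, 13, 19, 24]
    }
}
```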
deadlock
a cycle of dependencies in which each thread waits for a lock held by another thread; solutions include acquiring locks in a consistent order and using smaller critical sections
atomic
an operation no other threads can interrupt or interleave with
re entrant locks
held by a thread, not by a single method call: while holding a lock, a thread's execution can re-enter another critical section guarded by the same lock without blocking on itself
try{}finally{}
allows a thread to release a lock even if an exception is thrown while it holds the lock
bad interleaving
when two threads access shared resources in an overlapping order that exposes an intermediate state, causing bugs
Adjacency Matrix
a representation of a graph in which a boolean matrix contains a 1 at position (i,j) iff there is an arc from node i to node j.
adjacency list
a representation of a graph in which each node has a list of nodes that are adjacent to it, i.e. connected to it by an arc.
Breadth-First Search
-start at the root node
-scan each node in the first level, starting from the leftmost node and moving toward the right
-then continue scanning the second level (starting from the left), the third level, and so on until all the nodes have been scanned
-discovered nodes are stored in a FIFO queue
-running time O(V+E)
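The steps above can be sketched over an adjacency list; the FIFO queue is what produces the level-by-level order (class name and the four-node example graph are illustrative):

```java
import java.util.*;

public class BfsDemo {
    // Returns the vertices in the order BFS visits them, starting from root.
    static List<Integer> bfs(List<List<Integer>> adj, int root) {
        List<Integer> order = new ArrayList<>();
        boolean[] seen = new boolean[adj.size()];
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(root);
        seen[root] = true;
        while (!queue.isEmpty()) {
            int u = queue.remove();        // FIFO: earliest-discovered vertex first
            order.add(u);
            for (int v : adj.get(u))
                if (!seen[v]) { seen[v] = true; queue.add(v); }
        }
        return order;  // each vertex and edge is handled once: O(V + E)
    }

    public static void main(String[] args) {
        // Example graph: 0 -> 1, 2;  1 -> 3;  2 -> 3;  3 -> (none)
        List<List<Integer>> adj = List.of(List.of(1, 2), List.of(3), List.of(3), List.of());
        System.out.println(bfs(adj, 0)); // [0, 1, 2, 3]
    }
}
```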
Iterative Deepening
IDDFS calls DFS with increasing depth limits, starting from an initial value. Each call restricts DFS from going beyond the given depth, so it effectively performs DFS in a BFS fashion.
Hamiltonian Circuit
A circuit using distinct edges of a graph that starts and ends at a particular vertex of the graph and visits each vertex once and only once. A Hamiltonian circuit can start at any one of its vertices.
Topological Ordering
a linear ordering of vertices such that for every directed edge uv, vertex u comes before v in the ordering.
parallel prefix algorithm
NP
Nondeterministic Polynomial time; example: Hamiltonian Circuit
P
Polynomial time; example: Euler Circuit
EXP
Exponential time; example: NxN Chess
Euler Circuit
An Euler circuit is a circuit that uses every edge of a graph exactly once
start new fork
fjPool.invoke(new LengthSumTask(...));
Race Condition
A state where two threads can access the same object without proper mediation
deadlock
two computer programs (or threads) sharing the same resource each effectively prevent the other from accessing it
data race
Two memory accesses form a data race if they are from different threads to the same location, at least one is a write, and they are not ordered by synchronization (so they can happen at the same time).
NP-Complete
the hardest problems in NP, with no known practical (polynomial-time) algorithmic solutions; if any one NP-complete problem could be solved in polynomial time, then all NP-complete problems could be solved in polynomial time
T1
execution time for a parallel program on one processor
TP
execution time for a parallel program on P processors
T infinity
execution time for a parallel program on infinitely many processors
double hashing
The first hash function determines the original location where we should try to place the
item. If there is a collision, then the second hash function determines the probing
step distance: 1*h2(key), 2*h2(key), 3*h2(key), etc. away from the original location.
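The probe sequence above can be sketched as follows; the table size 10, the example second hash `7 - (key % 7)`, and the key are illustrative assumptions (h2 must never be 0 mod the table size, which is why a form like `q - (key % q)` for a prime q is common):

```java
public class DoubleHashDemo {
    // i-th probe location for double hashing: (h1(key) + i * h2(key)) % tableSize.
    static int probe(int key, int attempt, int tableSize) {
        int h1 = key % tableSize;      // first hash: the original location
        int h2 = 7 - (key % 7);        // second hash: the step distance, always in 1..7
        return (h1 + attempt * h2) % tableSize;
    }

    public static void main(String[] args) {
        // Successive probe locations for key 24 in a table of size 10:
        for (int i = 0; i < 4; i++) System.out.print(probe(24, i, 10) + " ");
        // prints: 4 8 2 6
    }
}
```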
re-hashing
Re-hashing is creating a new table (usually about twice as large, with a prime size), then
re-hashing the values from the original table, taking each hash mod the new table size, and
inserting into the new table.
disadvantage of quadratic hashing
In quadratic probing, if the table is more than half full (load factor > 0.5), then you are not
guaranteed to be able to find a location to place the value in.