CSE 332

55 Terms

1
New cards

in place

uses only O(1) extra memory; the sorted result is returned in the input array, with no auxiliary arrays

2
New cards

stable

if a appears before b in the initial array and a and b compare as equal, then a appears before b in the final array

3
New cards

goals for sorting

in place, stable, fast

4
New cards

Insertion Sort: best case

O(n)

5
New cards

Insertion Sort: worst case

O(n^2)

6
New cards

Selection Sort: best case/worst case

O(n^2)

7
New cards

Insertion Sort

maintains a sorted subarray at the front, inserting the next element of the unsorted part into it until the whole array is sorted; in place and stable
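
A minimal Java sketch of the card above (class and method names are my own):

```java
// Insertion sort: maintains a sorted prefix a[0..i-1] and inserts a[i]
// by shifting larger elements one slot to the right. In place and stable.
public class InsertionSortDemo {
    static void sort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int cur = a[i];
            int j = i - 1;
            // Strict > keeps equal elements in their original order (stability).
            while (j >= 0 && a[j] > cur) {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = cur;
        }
    }
    public static void main(String[] args) {
        int[] a = {5, 2, 4, 1, 3};
        sort(a);
        System.out.println(java.util.Arrays.toString(a)); // [1, 2, 3, 4, 5]
    }
}
```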

8
New cards

Selection Sort

maintains a sorted subarray at the front, appending the next smallest element of the unsorted part until the whole array is sorted; in place, but the usual swap-based version is not stable

9
New cards

Heap Sort

selection sort where the unsorted part is a heap; in place but not stable; the heap makes each selection O(log(n)), bringing selection sort down to

O(n log(n))

10
New cards

Merge Sort

splits the array in half recursively, sorting and then merging the halves

T(n) = 2T(n/2) + c1*n for n > 1, and T(1) = c2; O(n log(n)); stable if merged correctly, but not in place
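
A minimal Java sketch of the recurrence above in code form (names are my own):

```java
import java.util.Arrays;

// Merge sort: recursively sort halves, then merge. Using <= when taking
// from the left half keeps equal elements in order (stable). Not in
// place: the merge uses an auxiliary array.
public class MergeSortDemo {
    static void sort(int[] a, int lo, int hi) { // sorts a[lo, hi)
        if (hi - lo <= 1) return;
        int mid = (lo + hi) / 2;
        sort(a, lo, mid);
        sort(a, mid, hi);
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid;
        for (int k = 0; k < tmp.length; k++) {
            // Take from the left half when the right is exhausted, or when
            // the left element is <= the right element (stability).
            if (j >= hi || (i < mid && a[i] <= a[j])) tmp[k] = a[i++];
            else tmp[k] = a[j++];
        }
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }
    public static void main(String[] args) {
        int[] a = {5, 2, 4, 1, 3, 2};
        sort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // [1, 2, 2, 3, 4, 5]
    }
}
```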

11
New cards

Quick Sort

partitions elements based on their relation to a pivot, chosen using median-of-three; in place but not stable; uses swapping to move elements of the subarray smaller/larger than the pivot to the correct side
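
A minimal Java sketch of the card above, with median-of-three pivot selection and in-place swap partitioning (names are my own):

```java
import java.util.Arrays;

// Quick sort: median-of-three pivot, then in-place partition by swapping
// elements across the pivot. In place but not stable.
public class QuickSortDemo {
    static void sort(int[] a, int lo, int hi) { // sorts a[lo, hi)
        if (hi - lo <= 1) return;
        // Median of three: order a[lo], a[mid], a[hi-1]; the middle one is the pivot.
        int mid = (lo + hi) / 2;
        if (a[mid] < a[lo]) swap(a, lo, mid);
        if (a[hi - 1] < a[lo]) swap(a, lo, hi - 1);
        if (a[hi - 1] < a[mid]) swap(a, mid, hi - 1);
        swap(a, lo, mid);               // stash the pivot at the front
        int pivot = a[lo];
        int i = lo + 1, j = hi - 1;
        while (i <= j) {                // partition: small left, large right
            while (i <= j && a[i] < pivot) i++;
            while (i <= j && a[j] > pivot) j--;
            if (i <= j) swap(a, i++, j--);
        }
        swap(a, lo, j);                 // put the pivot between the two sides
        sort(a, lo, j);
        sort(a, j + 1, hi);
    }
    static void swap(int[] a, int x, int y) { int t = a[x]; a[x] = a[y]; a[y] = t; }
    public static void main(String[] args) {
        int[] a = {3, 8, 1, 9, 4, 7, 2};
        sort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // [1, 2, 3, 4, 7, 8, 9]
    }
}
```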

12
New cards

Quick Sort: completely sequential best case

O(n log(n)) (pivot is the median)

13
New cards

Quick Sort: completely sequential worst case

O(n^2) (pivot is smallest or largest element)

14
New cards

Quick Sort: sequential partitioning, parallel recursive sorting span

O(n)

15
New cards

Quick Sort: completely parallel span

O(log^2(n))

16
New cards

Merge Sort: completely parallel span

O(log^3(n))

17
New cards

Bucket Sort

sorts elements into buckets indexed by value, O(n+m) where m is the number of buckets and n is the number of elements; beats the comparison-sort lower bound because placing an element in its bucket takes O(1) time with no comparisons

18
New cards

Radix sort

sorts by one digit at a time, placing elements into buckets; stable; useful for ints and strings that aren't too long; running time O((n+r)d) where d is the number of digits and r is the radix (number of buckets)
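
A minimal Java sketch of least-significant-digit radix sort for non-negative ints, radix r = 10 (names are my own):

```java
import java.util.Arrays;

// LSD radix sort: each pass is a stable counting sort on one digit;
// stability is what makes the earlier (less significant) passes stick.
// Running time O((n + r) * d) for d digits in radix r.
public class RadixSortDemo {
    static void sort(int[] a) {
        final int R = 10;
        int max = 0;
        for (int x : a) max = Math.max(max, x);
        for (int exp = 1; max / exp > 0; exp *= R) {
            int[] count = new int[R];
            for (int x : a) count[(x / exp) % R]++;
            for (int d = 1; d < R; d++) count[d] += count[d - 1]; // prefix counts
            int[] out = new int[a.length];
            for (int i = a.length - 1; i >= 0; i--)  // back-to-front keeps stability
                out[--count[(a[i] / exp) % R]] = a[i];
            System.arraycopy(out, 0, a, 0, a.length);
        }
    }
    public static void main(String[] args) {
        int[] a = {170, 45, 75, 90, 802, 24, 2, 66};
        sort(a);
        System.out.println(Arrays.toString(a)); // [2, 24, 45, 66, 75, 90, 170, 802]
    }
}
```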

19
New cards

lower bound for comparison based sorting

Omega(n log(n)): any comparison sort needs at least log_2(n!) comparisons, and log_2(n!) is Omega(n log(n))

20
New cards

Amdahl's Law

T1/TP <= 1 / (S + (1-S)/P), where S is the serial fraction of the work and P is the number of processors
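
A tiny Java sketch of the bound above with made-up numbers (the 25% serial fraction is just an example):

```java
// Amdahl's Law: upper bound on speedup given serial fraction S and P
// processors. Even with infinitely many processors, speedup <= 1/S.
public class AmdahlDemo {
    static double maxSpeedup(double s, int p) {
        return 1.0 / (s + (1.0 - s) / p);
    }
    public static void main(String[] args) {
        // With 25% serial work: 4 processors give at most ~2.29x,
        // and a million processors still can't beat 1/S = 4x.
        System.out.println(maxSpeedup(0.25, 4));
        System.out.println(maxSpeedup(0.25, 1_000_000));
    }
}
```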

21
New cards

Speedup

T1 / TP

22
New cards

Parallelism

T1 / Tinfinity; by Amdahl's Law this is at most 1/S, where S is the serial fraction

23
New cards

work

running time without parallelism

24
New cards

span

longest path through the computation graph (DAG); the running time with infinitely many processors (Tinfinity)

25
New cards

Reduce

reduces an array to a single element or answer, usually uses RecursiveTask
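
A minimal Java sketch of a reduce with RecursiveTask (SumTask and the cutoff value are my own, not from the course):

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Reduce: sum an array with fork/join. Below the cutoff, sum sequentially;
// above it, fork the left half and compute the right half in this thread.
public class ReduceDemo {
    static final int CUTOFF = 1000; // arbitrary sequential cutoff
    static class SumTask extends RecursiveTask<Long> {
        final int[] a; final int lo, hi;
        SumTask(int[] a, int lo, int hi) { this.a = a; this.lo = lo; this.hi = hi; }
        protected Long compute() {
            if (hi - lo <= CUTOFF) {
                long sum = 0;
                for (int i = lo; i < hi; i++) sum += a[i];
                return sum;
            }
            int mid = (lo + hi) / 2;
            SumTask left = new SumTask(a, lo, mid);
            SumTask right = new SumTask(a, mid, hi);
            left.fork();              // run the left half in another thread
            long r = right.compute(); // compute the right half ourselves
            return left.join() + r;   // combine the two answers
        }
    }
    public static void main(String[] args) {
        int[] a = new int[10_000];
        java.util.Arrays.fill(a, 1);
        long total = ForkJoinPool.commonPool().invoke(new SumTask(a, 0, a.length));
        System.out.println(total); // 10000
    }
}
```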

26
New cards

Map

performs an operation on each element of input, producing an array of outputs, usually uses RecursiveAction

27
New cards

Prefix

creates an output array where position i combines element i with all elements before it (e.g., prefix sums), work O(n) span O(log(n))

28
New cards

Pack/Filter

packs the elements of the input array that satisfy a given condition into a smaller output array (a map, then a prefix, then a map), work O(n) span O(log(n))
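
The three steps above, shown sequentially for clarity in a Java sketch (in the parallel version each step would be a parallel map or parallel prefix; names are my own):

```java
import java.util.Arrays;

// Pack/filter: (1) map each element to a 0/1 "keep" bit, (2) prefix-sum
// the bits to get each kept element's output index, (3) map each kept
// element into its slot in the output array.
public class PackDemo {
    static int[] pack(int[] in, java.util.function.IntPredicate keep) {
        int n = in.length;
        int[] bits = new int[n];
        for (int i = 0; i < n; i++) bits[i] = keep.test(in[i]) ? 1 : 0;  // map
        int[] bitsum = new int[n];
        int run = 0;
        for (int i = 0; i < n; i++) { run += bits[i]; bitsum[i] = run; } // prefix
        int[] out = new int[n == 0 ? 0 : bitsum[n - 1]];
        for (int i = 0; i < n; i++)
            if (bits[i] == 1) out[bitsum[i] - 1] = in[i];                // map
        return out;
    }
    public static void main(String[] args) {
        int[] out = pack(new int[]{17, 4, 6, 8, 11, 5, 13, 19, 0, 24}, x -> x > 10);
        System.out.println(Arrays.toString(out)); // [17, 11, 13, 19, 24]
    }
}
```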

29
New cards

deadlock

a cycle of dependencies in which each thread waits for a lock held by another; solutions include acquiring locks in a consistent global order and keeping critical sections small

30
New cards

atomic

an operation no other threads can interrupt or interleave with

31
New cards

re entrant locks

a lock held per thread rather than per method call: while holding the lock, execution can re-enter another critical section guarded by the same lock without blocking

32
New cards

try{}finally{}

ensures a thread releases its lock even if it throws an exception while holding the lock
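
A minimal Java sketch of the idiom (BankAccount and its fields are made up for illustration):

```java
import java.util.concurrent.locks.ReentrantLock;

// try/finally with an explicit lock: the finally block runs on both
// normal exit and exception, so the lock is always released.
public class TryFinallyDemo {
    static class BankAccount {
        private final ReentrantLock lock = new ReentrantLock();
        private int balance = 0;
        void deposit(int amount) {
            lock.lock();
            try {
                if (amount < 0) throw new IllegalArgumentException("negative");
                balance += amount;
            } finally {
                lock.unlock(); // released even when the check above throws
            }
        }
        int getBalance() { return balance; }
    }
    public static void main(String[] args) {
        BankAccount acct = new BankAccount();
        acct.deposit(100);
        try { acct.deposit(-5); } catch (IllegalArgumentException e) { /* expected */ }
        acct.deposit(50); // would block forever if the failed deposit kept the lock
        System.out.println(acct.getBalance()); // 150
    }
}
```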

33
New cards

bad interleaving

when operations of two threads interleave in an order that exposes an intermediate state, causing bugs even if each individual access is synchronized

34
New cards

Adjacency Matrix

a representation of a graph in which a boolean matrix contains a 1 at position (i,j) iff there is an arc from node i to node j.

35
New cards

adjacency list

a representation of a graph in which each node has a list of nodes that are adjacent to it, i.e. connected to it by an arc.

36
New cards

Breadth First Search

-start at the root node

-scan each node in the first level, starting from the leftmost node and moving towards the right

-then you continue scanning the second level (starting from the left) and the third level, and so on until you've scanned all the nodes

-nodes to visit are stored in a FIFO queue

-running time O(V+E)
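
The steps above can be sketched in Java over an adjacency-list graph (the example graph is made up):

```java
import java.util.*;

// BFS: visit nodes level by level using a FIFO queue. Each vertex is
// enqueued at most once and each edge scanned once, giving O(V + E).
public class BfsDemo {
    static List<Integer> bfs(List<List<Integer>> adj, int start) {
        List<Integer> order = new ArrayList<>();
        boolean[] seen = new boolean[adj.size()];
        Deque<Integer> queue = new ArrayDeque<>();
        queue.add(start);
        seen[start] = true;
        while (!queue.isEmpty()) {
            int u = queue.remove();   // FIFO: oldest discovered node first
            order.add(u);
            for (int v : adj.get(u)) {
                if (!seen[v]) { seen[v] = true; queue.add(v); }
            }
        }
        return order;
    }
    public static void main(String[] args) {
        // 0 -> 1, 2;  1 -> 3;  2 -> 3;  3 -> (none)
        List<List<Integer>> adj =
                List.of(List.of(1, 2), List.of(3), List.of(3), List.of());
        System.out.println(bfs(adj, 0)); // [0, 1, 2, 3]
    }
}
```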

37
New cards

Iterative Deepening

IDDFS calls DFS for increasing depth limits, starting from an initial value. In every call, DFS is restricted from going beyond the given depth, so effectively it explores the graph in a BFS fashion using DFS's memory footprint.

38
New cards

Hamiltonian Circuit

A circuit using distinct edges of a graph that starts and ends at a particular vertex of the graph and visits each vertex once and only once. A Hamiltonian circuit can start at any one of its vertices.

39
New cards

Topological Ordering

a linear ordering of vertices such that for every directed edge uv, vertex u comes before v in the ordering.
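
The definition above can be sketched with Kahn's algorithm, one standard way to compute a topological ordering (the example DAG is made up):

```java
import java.util.*;

// Topological ordering via Kahn's algorithm: repeatedly output a vertex
// with in-degree 0 and decrement the in-degree of its successors.
public class TopoSortDemo {
    static List<Integer> topoSort(List<List<Integer>> adj) {
        int n = adj.size();
        int[] indeg = new int[n];
        for (List<Integer> outs : adj)
            for (int v : outs) indeg[v]++;
        Deque<Integer> ready = new ArrayDeque<>();
        for (int u = 0; u < n; u++) if (indeg[u] == 0) ready.add(u);
        List<Integer> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            int u = ready.remove();
            order.add(u);
            for (int v : adj.get(u))
                if (--indeg[v] == 0) ready.add(v);
        }
        return order; // if shorter than n, the graph contained a cycle
    }
    public static void main(String[] args) {
        // 0 -> 1, 0 -> 2, 1 -> 3, 2 -> 3
        List<List<Integer>> adj =
                List.of(List.of(1, 2), List.of(3), List.of(3), List.of());
        System.out.println(topoSort(adj)); // [0, 1, 2, 3]
    }
}
```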

40
New cards

parallel prefix algorithm

computes all prefix sums in parallel: an up pass builds a tree of sums over array ranges, then a down pass gives each position the sum of everything to its left; work O(n), span O(log(n))

41
New cards

NP

nondeterministic polynomial time; example problem: Hamiltonian Circuit

42
New cards

P

polynomial time; example problem: Euler circuit

43
New cards

EXP

exponential time; example problem: generalized chess on an NxN board

44
New cards

Euler Circuit

An Euler circuit is a circuit that uses every edge of a graph exactly once

45
New cards

start new fork

fjPool.invoke(new LengthSumTask(...))

46
New cards

Race Condition

A state where two subjects can access the same object without proper mediation

47
New cards

deadlock

two threads or programs sharing resources each hold something the other needs, effectively preventing each other from ever proceeding

48
New cards

data race

Two memory accesses form a data race if they are from different threads to the same location, at least one is a write, and they are not ordered by synchronization (they can happen concurrently).

49
New cards

NP-Complete

the hardest problems in NP; no polynomial-time algorithm is known for any of them. If any one NP-complete problem could be solved in polynomial time, then all NP-complete problems could be solved in polynomial time.

50
New cards

T1

execution time for parallel program on one processor

51
New cards

TP

execution time for parallel program on P processors.

52
New cards

T infinity

execution time for parallel program on infinite processors

53
New cards

double hashing

The first hash function determines the original location where we should try to place the

item. If there is a collision, then the second hash function is used to determine the probing

step distance as 1h2(key), 2h2(key), 3*h2(key) etc. away from the original location.
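
A minimal Java sketch of the probe sequence described above (h1, h2, and the table size are made-up choices; h2 must never return 0, and a prime table size helps the probes reach every slot):

```java
// Double hashing: probe sequence h1(k), h1(k)+1*h2(k), h1(k)+2*h2(k), ...
// all taken mod the table size.
public class DoubleHashDemo {
    static final int SIZE = 11;                                // prime table size
    static int h1(int key) { return key % SIZE; }              // original location
    static int h2(int key) { return 1 + (key % (SIZE - 1)); }  // step; never 0
    static int probe(int key, int i) { return (h1(key) + i * h2(key)) % SIZE; }
    public static void main(String[] args) {
        int key = 14; // h1 = 3, h2 = 5
        for (int i = 0; i < 4; i++)
            System.out.print(probe(key, i) + " "); // 3 8 2 7
        System.out.println();
    }
}
```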

54
New cards

re-hashing

Re-hashing is creating a new table (usually about twice as large, with a prime size), then re-hashing each value from the original table, modding by the new table size, and inserting it into the new table.

55
New cards

disadvantage of quadratic hashing

In quadratic probing, if the table is more than half full (load factor > 0.5), then you are not guaranteed to be able to find a location to place the value in. (With a prime table size and load factor <= 0.5, the first tablesize/2 probes are guaranteed to be distinct.)