CMPS 355 Design and Analysis of Algorithms - Week 4: Algorithm Analysis
Analysis of Algorithms
Data Structure: A systematic method for organizing and accessing data.
Algorithm: A step-by-step procedure for performing a task in a finite time.
Correctness: An algorithm is correct if it produces the correct output for every input and eventually terminates.
Running Time: Measures the efficiency of algorithms or data structure operations, typically expressed in terms of input size n.
Running Time
Algorithms transform input objects into output objects.
Running time typically grows with n.
Average-case time is often difficult to determine.
Focus on worst-case running time because:
It is easier to analyze.
It is crucial for applications like games, finance, and robotics.
Limitations of Experiments
Implementing the algorithm may be difficult.
Results may not indicate running time on other inputs.
Comparing algorithms requires the same hardware and software environments.
Theoretical Analysis
Uses a high-level description of the algorithm instead of an implementation.
Actual code is not needed.
It is language-independent.
Takes into account all possible inputs.
Allows evaluation of speed independent of hardware/software.
Counting Primitive Operations
Count how many primitive operations are executed to analyze running time.
A primitive operation is a low-level instruction with constant execution time.
Assumption: running times of different primitive operations are fairly similar.
Characterizes running time as a function of the input size n.
Pseudocode
High-level description of an algorithm.
More structured than English, less detailed than a program.
Preferred notation for describing algorithms.
Hides program design issues.
Pseudocode Details
Control flow:
if … then … [else …]
while … do …
repeat … until …
for … do …
Indentation replaces braces.
Method declaration:
Algorithm method(arg [, arg…])
Input …
Output …
Method call:
method(arg [, arg…])
Return value:
return expression
Expressions:
<- Assignment
= Equality testing
n^2 Superscripts and other mathematical formatting allowed
Primitive Operations
Basic computations performed by an algorithm.
Identifiable in pseudocode.
Largely independent of programming language.
Assume they take a constant amount of time in the RAM model.
Primitive Operations Examples
Evaluating an expression.
Assigning a value to a variable.
Indexing into an array.
Calling a method.
Returning from a method.
Counting Primitive Operations
By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm as a function of the input size.
Step-by-step Analysis
Examine the arrayMax pseudocode line by line:
Line 3 (currentMax <- data[0]): 2 primitive operations (index, assign).
Line 4 (initialize the loop counter): 2 primitive operations.
Line 5 (loop control): executes n−1 times; each iteration costs 2 operations (comparison and increment), for 2(n−1) in total.
Line 6 (data[j] > currentMax): 2 operations per iteration over n−1 iterations: 2(n−1).
Line 7 (currentMax <- data[j]): this assignment executes only when the test in line 6 is true. In the worst case (an increasing array) it runs in every iteration: 2(n−1). In the best case it never runs.
Line 8 (return): 1 primitive operation.
Total number of primitive operations: worst case 2 + 2 + 2(n−1) + 2(n−1) + 2(n−1) + 1 = 6n − 1; best case 2 + 2 + 2(n−1) + 2(n−1) + 1 = 4n + 1.
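This tally can be checked empirically. Below is a minimal Python sketch (the function name and counting scheme are mine): it charges operations using the same per-line scheme, so a strictly increasing input of size n costs 6n − 1 operations and a strictly decreasing one 4n + 1.

```python
def array_max_with_count(data):
    """Find the maximum of data, tallying primitive operations
    with the same per-line scheme as the analysis above."""
    ops = 2                          # currentMax <- data[0]: index + assign
    current_max = data[0]
    ops += 2                         # initialize the loop counter
    for j in range(1, len(data)):
        ops += 2                     # loop control: comparison + increment
        ops += 2                     # data[j] > currentMax: index + compare
        if data[j] > current_max:
            ops += 2                 # currentMax <- data[j]: index + assign
            current_max = data[j]
    ops += 1                         # return statement
    return current_max, ops
```

Calling it on [1, 2, 3, 4, 5] (worst case) yields 29 = 6·5 − 1 operations; on [5, 4, 3, 2, 1] (best case) it yields 21 = 4·5 + 1.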
Asymptotic Analysis
A mathematical framework for comparing functions.
Focus on the growth rate of running time as a function of input size n.
Analyzes algorithms using mathematical notation that disregards constant factors.
Big-Oh Notation
Given functions f(n) and g(n), f(n) is O(g(n)) if there are positive constants c and n_0 such that f(n) \leq cg(n) for n \geq n_0.
Allows us to say that f(n) is “less than or equal to” g(n) up to a constant factor as n grows toward infinity.
Big-Oh Example
The function 2n + 10 is O(n).
2n + 10 \leq cn for all n \geq n_0.
(c − 2) n \geq 10.
n \geq \frac{10}{c − 2}.
Pick c = 3 and n_0 = 10.
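The chosen witnesses can be checked mechanically. This short Python check confirms that c = 3 and n_0 = 10 satisfy the definition, and that the bound fails just below n_0:

```python
# Verify the Big-Oh witnesses c = 3, n0 = 10 for f(n) = 2n + 10.
for n in range(10, 10_000):
    assert 2 * n + 10 <= 3 * n      # holds for every n >= n0

# Just below n0 the bound fails: 2*9 + 10 = 28 > 27 = 3*9.
assert 2 * 9 + 10 > 3 * 9
```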
Big-Oh Example
The function n^2 is not O(n).
n^2 \leq cn.
n \leq c.
The above inequality cannot be satisfied since c must be a constant.
Big-Oh Rules
If f(n) is a polynomial of degree d, then f(n) is O(n^d)
Choose the smallest possible class of functions:
Express the function in the most specific, least restrictive form.
2n is O(n) rather than O(n^2)
Use the simplest representation:
Ignore constants & lower terms when determining complexity.
3n+5 is O(n) rather than O(3n)
5n^3 + 2n^2 + 7n + 10 simplifies to O(n^3).
Big-Oh Practice
7n - 2 is O(n)
3n^3 + 20n^2 + 5 is O(n^3)
3 \log n + 5 is O(\log n)
2^{n+2} is O(2^n)
100 \log n + 2n is O(n)
n \log n + n^3 + 2n is O(n^3)
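These practice answers can be sanity-checked numerically: the ratio of a function to its claimed dominant term approaches a constant as n grows. A small Python sketch for the last example (the helper name is mine):

```python
import math

def ratio(n):
    """Ratio of f(n) = n log n + n^3 + 2n to its dominant term n^3.
    As n grows, the lower-order terms become negligible and the
    ratio approaches 1."""
    f = n * math.log2(n) + n ** 3 + 2 * n
    return f / n ** 3

# ratio(10) is noticeably above 1; ratio(1000) is already within 1e-4 of 1.
```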
Big-Oh Example: ArrayMax
The algorithm arrayMax for computing the maximum element of an array of n numbers runs in O(n) time.
Algorithm arrayMax(A, n)
Input array A of n integers
Output the maximum element of A
currentMax <- A[1]
for i <- 2 to n do
if A[i] > currentMax then
currentMax <- A[i]
return currentMax
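The pseudocode translates directly into Python (0-based indexing replaces the pseudocode's 1-based A[1]):

```python
def array_max(A):
    """Return the maximum element of a non-empty list A.
    A single pass over the n elements -> O(n) time."""
    current_max = A[0]               # pseudocode: currentMax <- A[1]
    for i in range(1, len(A)):       # pseudocode: for i <- 2 to n
        if A[i] > current_max:
            current_max = A[i]
    return current_max
```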
Seven Important Functions
Constant ≈ 1
Logarithmic ≈ log n
Linear ≈ n
N-Log-N ≈ n log n
Quadratic ≈ n^2
Cubic ≈ n^3
Exponential ≈ 2^n
Relatives of Big-Oh
Big-Omega
f(n) is \Omega(g(n)) if there is a constant c > 0 and an integer constant n_0 \geq 1 such that f(n) \geq cg(n) for n \geq n_0.
Equivalently, g(n) is O(f(n)).
Big-Theta
f(n) is \Theta(g(n)) if there are constants c' > 0 and c'' > 0 and an integer constant n_0 \geq 1 such that c'g(n) \leq f(n) \leq c''g(n) for n \geq n_0.
f(n) is O(g(n)) and f(n) is \Omega(g(n)).
Intuition for Asymptotic Notation
Big-Oh:
f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n).
Big-Omega:
f(n) is \Omega(g(n)) if f(n) is asymptotically greater than or equal to g(n).
Big-Theta:
f(n) is \Theta(g(n)) if f(n) is asymptotically equal to g(n).
Example 1: Print elements!
Given the following Java code:
public static void printElements (int[] a){
System.out.println("first element: "+a[0]);
System.out.println("last element: "+a[a.length-1]);
}
The time complexity is O(1).
Example: Element uniqueness (1)!
Given the following code:
for i in range(n):
for j in range(i+1, n):
if arr[i] == arr[j]:
return "Not Unique"
return "Unique"
The outer loop runs n times.
The inner loop runs n-1, n-2, …, 1 times.
In the worst case, the total number of comparisons is \frac{n(n-1)}{2}, which is O(n^2).
Example: Element uniqueness (2)!
Given the following code:
arr.sort()
for i in range(1, n):
if arr[i] == arr[i - 1]:
return "Not Unique"
return "Unique"
Sorting takes O(n \log n).
The linear scan to check adjacent elements takes O(n).
The overall time complexity is O(n \log n).
Example: Element uniqueness (3)!
Given the following code:
unique_set = set()
for element in arr:
if element in unique_set:
return "Not Unique"
unique_set.add(element)
return "Unique"
Each membership test and insertion on a hash set takes expected O(1) time.
The loop runs at most n times.
The overall time complexity is expected O(n), at the cost of O(n) extra space for the set.
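The three approaches can be collected into self-contained Python functions for comparison (the function names are mine, not from the slides); all three agree on the answer but differ in cost:

```python
def unique_quadratic(arr):
    """Compare every pair: O(n^2) time, O(1) extra space."""
    n = len(arr)
    for i in range(n):
        for j in range(i + 1, n):
            if arr[i] == arr[j]:
                return False
    return True

def unique_sorting(arr):
    """Sort, then scan adjacent elements: O(n log n) time."""
    arr = sorted(arr)
    return all(arr[i] != arr[i - 1] for i in range(1, len(arr)))

def unique_hashing(arr):
    """Hash-set membership: expected O(n) time, O(n) extra space."""
    seen = set()
    for element in arr:
        if element in seen:
            return False
        seen.add(element)
    return True
```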
Analyzing Code
Operations: constant time.
Consecutive Statements: sum of times.
Conditionals: larger branch plus test.
Loops: sum of the body's cost over all iterations.
Function calls: cost of function body.
Recursive functions: solve recursive equation.
Nested Loops
for i = 1 to n do
for j = 1 to n do
sum = sum + 1
The time complexity is \sum_{i=1}^{n} \sum_{j=1}^{n} 1 = n^2
Dependent Nested Loops
for i = 1 to n do
for j = i to n do
sum = sum + 1
The time complexity is \sum_{i=1}^{n} \sum_{j=i}^{n} 1 = \sum_{i=1}^{n} (n - i + 1) = \frac{n(n+1)}{2}
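Both closed forms can be verified by brute-force counting in Python (helper names are mine):

```python
def count_full(n):
    """sum_{i=1}^{n} sum_{j=1}^{n} 1: one count per inner iteration."""
    return sum(1 for i in range(1, n + 1) for j in range(1, n + 1))

def count_dependent(n):
    """sum_{i=1}^{n} sum_{j=i}^{n} 1: inner loop starts at j = i."""
    return sum(1 for i in range(1, n + 1) for j in range(i, n + 1))

# The brute-force counts match the closed forms n^2 and n(n+1)/2.
for n in range(1, 30):
    assert count_full(n) == n * n
    assert count_dependent(n) == n * (n + 1) // 2
```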
Recursion
A recursive procedure can often be analyzed by solving a recursive equation.
Basic form: T(n) = \begin{cases} b & \text{base case} \\ N \cdot T(\text{subproblem size}) + C(n) & \text{recursive case} \end{cases}
Factors influencing the result:
(N) Number of subproblems.
(S) Size of subproblems.
(C) Combination cost.
Sum of Queue: Recursion Example
Write a recursive function called SumQueue(Q) that computes the sum of all elements in a queue Q.
SumQueue(Q)
if (Q.length == 0 )
return 0
else
return Q.dequeue() + SumQueue(Q)
(N) One subproblem
(S) Linear reduction in size (decrease by 1)
(C) Combining: constant (cost of 1 add)
T(0) = b
T(n) \leq c + T(n-1) \ for \ n>0
T(n) = O(n)
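The pseudocode maps directly onto Python's collections.deque, whose popleft plays the role of dequeue (like the pseudocode, this version consumes the queue):

```python
from collections import deque

def sum_queue(q):
    """Recursively sum a queue q (a deque).
    One subproblem of size n-1 plus a constant-cost add:
    T(n) = T(n-1) + c  =>  O(n)."""
    if not q:                            # base case: empty queue
        return 0
    return q.popleft() + sum_queue(q)    # dequeue the front, recurse on the rest
```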
Binary Search: Recursion Example
Write a recursive function called BinarySearch(arr, target, low, high) that finds the position of a target value in a sorted array.
BinarySearch(arr, target, low, high)
if (low > high)
→ Target not found
mid = ⌊(low + high) / 2⌋
if (arr[mid] == target)
→ Return mid
else if (arr[mid] > target)
→ BinarySearch(left half)
else
→ BinarySearch(right half)
(N) 1
(S) decreases by half each time
(C) Constant
T(1) = b
T(n) \leq c+T(n/2) \ for \ n>1
T(n) = O(\log n)
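A runnable Python version of the pseudocode (returning -1 for "target not found"):

```python
def binary_search(arr, target, low, high):
    """Return the index of target in the sorted slice arr[low..high],
    or -1 if absent. The range halves each call:
    T(n) = T(n/2) + c  =>  O(log n)."""
    if low > high:
        return -1                                        # target not found
    mid = (low + high) // 2                              # floor of the midpoint
    if arr[mid] == target:
        return mid
    elif arr[mid] > target:
        return binary_search(arr, target, low, mid - 1)  # search left half
    else:
        return binary_search(arr, target, mid + 1, high) # search right half
```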
Merge Sort: Recursion Example
Write a recursive function called MergeSort(arr) that sorts the array (arr).
MergeSort(arr):
if (arr.length <= 1):
return arr
mid = arr.length // 2
left = MergeSort(arr[0:mid])
right = MergeSort(arr[mid:])
return merge(left, right)
(N) 2
(S) decreases by half each time
(C) O(n)
T(1) = b
T(n) \leq cn+2T(n/2) \ for \ n>1
T(n) = O(n \log n)
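A complete Python version, including the merge step the pseudocode leaves implicit:

```python
def merge_sort(arr):
    """Sort a list: two half-size subproblems plus a linear merge,
    so T(n) = 2T(n/2) + cn  =>  O(n log n)."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    return merge(left, right)

def merge(left, right):
    """Merge two sorted lists into one sorted list in O(n) time."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])      # at most one of these
    result.extend(right[j:])     # extends is non-empty
    return result
```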
Sample Question: Recursion
def countDown(n):
if n == 0:
return
else:
print(n)
countDown(n - 1)
a) What does this function do?
b) Write the Recurrence function T(n)
c) Solve it to find O(n)
Summary
Algorithm running time analysis
Start with running time function, expressing number of computer steps in terms of input size
Focus on very large problem size, i.e., asymptotic running time
Big-O notation => focus on the dominating term of the running-time function
Constant, linear, polynomial, exponential time algorithms …