Course: CS170 - Discrete Methods in Computer Science, Spring 2025
Instructor: Shaddin Dughmi
Content adapted from slides by Aaron Cote.
1. Quantifying Runtimes
2. Order Notation (Big-O and friends)
3. Comparing Runtimes
Definition of an Algorithm:
A well-defined, step-by-step procedure that takes an input and produces a corresponding output.
Example: Sorting an unsorted array.
Different Sorting Algorithms:
Algorithms like Bubble Sort, Merge Sort, Quick Sort, and Insertion Sort serve the same purpose but differ in process.
Factors for Comparison:
Runtime
Memory usage
Simplicity of the algorithm
Communication bandwidth required
Measuring Methodologies:
Time on the Clock:
Varies with architecture, processor count, and hardware upgrades, so it is a poor basis for comparing algorithms.
Number of Basic Operations:
Counts the number of fundamental instructions executed.
More reliable across different programming languages and architectures.
Preferred Measurement:
Number of Operations:
Standardized across different instruction sets, providing consistency in runtime evaluation.
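One way to make "number of basic operations" concrete is to instrument an algorithm with a counter. Below is a sketch using bubble sort, where comparisons and swaps are counted as the basic operations; which operations count as "basic" is a modeling choice, not a standard:

```python
def bubble_sort_op_count(arr):
    """Sort a copy of arr with bubble sort, counting comparisons
    and swaps as the 'basic operations' (a simplified accounting)."""
    a = list(arr)
    ops = 0
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            ops += 1                      # one comparison
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                ops += 1                  # one swap
    return a, ops

sorted_a, ops = bubble_sort_op_count([5, 2, 4, 1, 3])
```

Unlike wall-clock time, this count is the same on any machine running the same algorithm on the same input.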
Worst-Case Runtime Definition:
Evaluates the worst-case runtime for inputs of size n.
Example: The most time-consuming array to sort.
Resulting Function:
A function f : N → N, where f(n) indicates the maximum number of operations for inputs of size n.
Function is generally non-decreasing in practice.
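To illustrate how f(n) is obtained, consider linear search: for each input size n, take the maximum operation count over all inputs of that size. The sketch below (illustrative only) recovers the worst case f(n) = n by trying every target, including one that is absent:

```python
def linear_search_ops(arr, target):
    """Return the number of comparisons linear search performs."""
    ops = 0
    for x in arr:
        ops += 1
        if x == target:
            break
    return ops

# Worst case over inputs of size n: the target is absent (or last),
# so every element is examined and f(n) = n.
n = 8
worst = max(linear_search_ops(list(range(n)), t)
            for t in list(range(n)) + [-1])
```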
Input Size Definition:
Input may refer to bits used in representation or the length of the array, depending on context.
Emphasis on Worst-Case Runtime:
Provides guarantees that hold for every input, without relying on assumptions about real-world input distributions.
Contributes to robust theoretical foundations within computer science that have practical implications.
Average Case Considerations:
Sometimes relevant, though typically more complex.
Not emphasized in this class.
Constant:
Examples: 3, 5, 134893430
Linear:
Examples: n, 2n + 1
Quadratic:
Examples: n^2, 3n^2 + 1000n - 1
Polynomial:
Examples: 2n^5 + n^3 - n + 2
Logarithmic:
Examples: log n, 5 log n
Exponential:
Examples: 2^n, 3 · 5^n + n^2
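Evaluating one representative from several of these classes at a single moderate input size shows how quickly they separate. A small sketch (the particular functions are illustrative choices):

```python
import math

growth = {
    "constant":    lambda n: 5,
    "logarithmic": lambda n: math.log2(n),
    "linear":      lambda n: 2 * n + 1,
    "quadratic":   lambda n: 3 * n**2,
    "exponential": lambda n: 2**n,
}

# At n = 20 the classes are already widely separated:
# 2^20 is over a million, while log2(20) is under 5.
values = {name: f(20) for name, f in growth.items()}
```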
Granularity Assessment:
Focus on the aspects of runtime that remain stable across different architectures, basic-operation sets, and core counts.
Ignore constant multiples and lower-order terms, since the dominant term determines behavior at large input sizes.
Purpose of Order Notation:
Standardizes the comparison of algorithms by their asymptotic growth rates as input size increases.
Definition of Big-O:
For functions f(n) and g(n), f(n) = O(g(n)) if there exist positive constants n0 and c such that f(n) ≤ c*g(n) for all n ≥ n0.
It describes the asymptotic behavior as the inputs grow.
Alternative Definition:
f(n) = O(g(n)) if lim (n→∞) (f(n) / g(n)) < ∞; this agrees with the definition above whenever the limit exists (in general, one uses lim sup).
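The limit criterion can be explored numerically: if f(n)/g(n) shrinks (or stays bounded) as n grows, that suggests f(n) = O(g(n)). The sketch below is a numeric exploration only, not a proof:

```python
import math

def ratio(f, g, n):
    """Ratio f(n)/g(n), a numeric stand-in for the limit test."""
    return f(n) / g(n)

# log n / n shrinks toward 0 as n grows, consistent with
# log n = O(n) (in fact, log n = o(n)).
r_small = ratio(math.log, lambda n: n, 10)
r_large = ratio(math.log, lambda n: n, 10**6)
```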
Notation Abuse:
The "=" is an abuse of notation: O(g) is really a set of functions, and f(n) = O(g(n)) means f belongs to that set, i.e., f(n) does not grow faster than g(n).
Examples of Functions and Notations:
10n^3 = O(n^3)
10000n = O(n^2)
log n = O(n)
10000n^100 = O(2^n)
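For a claim like 10000n = O(n^2), the definition asks for witness constants. Taking c = 1 and n0 = 10000 works, since 10000n ≤ n^2 exactly when n ≥ 10000. A quick spot-check of that witness:

```python
# Witness constants for 10000n = O(n^2): c = 1, n0 = 10000.
c, n0 = 1, 10000

# The inequality 10000n <= c * n^2 holds from n0 onward
# (spot-checked on a range here, provable algebraically in general).
ok = all(10000 * n <= c * n**2 for n in range(n0, n0 + 1000))
```

Note that the inequality fails just below n0 (e.g., at n = 9999), which is exactly why the definition only requires it for n ≥ n0.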
Notation Definitions:
f(n) = Ω(g(n)): asymptotic lower bound (f grows at least as fast as g).
f(n) = Θ(g(n)): tight bound (f and g grow at the same rate, up to constant factors).
f(n) = o(g(n)): strict upper bound (f grows strictly slower than g).
f(n) = ω(g(n)): strict lower bound (f grows strictly faster than g).
Summary of Relations:
Think of O, Ω, Θ, o, ω as relative comparisons of growth (≤, ≥, =, <, > respectively).
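These relative comparisons can be mimicked numerically by looking at the ratio f(n)/g(n) at a large n: a ratio near 0 suggests o, a huge ratio suggests ω, and a bounded nonzero ratio suggests Θ. A heuristic sketch (the thresholds and sample point are arbitrary choices, and this is not a proof):

```python
import math

def compare_growth(f, g, n=10**6):
    """Heuristically classify f vs g by the ratio f(n)/g(n)
    at one large n. A numeric stand-in for the limit test."""
    r = f(n) / g(n)
    if r < 1e-3:
        return "f = o(g)"
    if r > 1e3:
        return "f = omega(g)"
    return "f = Theta(g)"

rel1 = compare_growth(lambda n: 10 * n**3, lambda n: n**3)  # ratio is 10
rel2 = compare_growth(math.log, lambda n: n)                # ratio shrinks to 0
```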
Compare the growth of log n and √√n (Hint: use L'Hôpital's rule).
Order the following runtimes:
n^n
log² n
n^1.01
1.01^n
2^(√log n)
n log 1000
n
From slowest-growing to fastest-growing, the main runtime classes are:
Constants
Logarithmic functions
Polynomials
Exponential functions
Although these are the primary classes, other complexities may arise in practical scenarios.
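The class ordering above can be spot-checked by evaluating one representative of each class at a single input size. Note the caveat that constants can exceed logarithms at small n, so n must be large enough for the asymptotic order to show (here 5 < log2(50) ≈ 5.6). A small sketch with illustrative representatives:

```python
import math

n = 50
samples = [
    ("constant",    5),
    ("logarithmic", math.log2(n)),
    ("polynomial",  n**3),
    ("exponential", 2**n),
]

# Sorting by value at n = 50 recovers the asymptotic ordering.
ordered = sorted(samples, key=lambda kv: kv[1])
names = [name for name, _ in ordered]
```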