O(n) means the work grows in direct proportion to n: a loop that does one thing per item does n things total. Think of it like baking n cookies: more cookies, more time.
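A minimal sketch of an O(n) loop: the function name `sum_list` is illustrative, and the body runs exactly once per element.

```python
def sum_list(items):
    """O(n): the loop body executes once per element -- n items, n steps."""
    total = 0
    for x in items:   # one pass over all n elements
        total += x
    return total
```

Doubling the length of `items` doubles the number of loop iterations, which is the "more cookies, more time" relationship.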
Worst-case: The longest possible time, like the worst traffic on your commute.
Average-case: The usual time, like a normal commute.
Best-case: The shortest possible time, like an empty highway.
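The three cases above can be seen in a linear search, sketched here with an illustrative step counter (`steps` is not part of a real search, it just makes the cost visible):

```python
def linear_search(items, target):
    """Scan left to right; the steps taken depend on where target sits."""
    steps = 0
    for i, x in enumerate(items):
        steps += 1
        if x == target:
            return i, steps   # found: steps = position + 1
    return -1, steps          # target absent: steps = n (worst case)
```

Best case: the target is first (1 step, the empty highway). Worst case: the target is missing (n steps, the worst traffic). Average case: roughly n/2 steps for a target in a random position.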
One problem, many solutions. Like commuting: you can walk, bike, or drive.
Factorials: Iterative (step-by-step) or recursive (self-calling).
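A minimal sketch of both factorial styles (function names are illustrative):

```python
def factorial_iter(n):
    """Iterative: multiply step-by-step in a loop."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def factorial_rec(n):
    """Recursive: the function calls itself on a smaller input."""
    return 1 if n <= 1 else n * factorial_rec(n - 1)
```

Both compute the same value; they differ in structure, not in result.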
Sorting: Shellsort, heapsort, quicksort, etc.
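As one example of the many sorting solutions, here is a minimal quicksort sketch (not an in-place or production version; it builds new lists for clarity):

```python
def quicksort(items):
    """Partition around a pivot, then recursively sort each half."""
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    left = [x for x in rest if x < pivot]     # smaller than pivot
    right = [x for x in rest if x >= pivot]   # pivot or larger
    return quicksort(left) + [pivot] + quicksort(right)
```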
Computational Complexity: Helps identify the most efficient of those solutions.
Efficiency matters with big data. Imagine searching a phone book with 10 names vs. the entire world's phone numbers.
Compares algorithm efficiency. Like judging a race by effort and resources used.
Time and space are key, with time being most important.
Raw run time is system-dependent. A fast car (C++) beats a slow bike (Basic) even on the same route.
Compare algorithms on the same "track."
Don't measure in real-time units. Use logical units: relate data size (n) to processing time (t).
Linear Relationship: t = cn. Double the data, double the time. Like buying apples: more apples, more cost.
Logarithmic Relationship: t = log₂ n. Doubling n adds one unit to t. Like halving a loaf of bread: each cut takes the same time.
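The two relationships above can be sketched as step-count functions (names and the constant `c` are illustrative):

```python
import math

def linear_steps(n, c=1):
    """t = cn: doubling n doubles t."""
    return c * n

def log_steps(n):
    """t = log2(n): doubling n adds exactly one unit to t."""
    return math.log2(n)
```

For example, going from n = 8 to n = 16 doubles `linear_steps` (8 to 16) but only bumps `log_steps` from 3 to 4.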
Simplify time functions for big data. Ignore small stuff to see the big picture.
This simplification is Asymptotic Complexity.
Complexity theory: Studies resources (time, memory) needed to solve problems.
Time = steps, space = memory.
Other resources: Number of processors.
Differs from computability theory, which asks whether a problem can be solved at all.
A "problem" is a set of questions, each encoded as a string; each individual question is an instance of the problem.
Example: FACTORIZE