Algorithms and Complexity Analysis
1.0 INTRODUCTION
1.1 What is an algorithm?
An algorithm is defined as a finite set of step-by-step procedures or instructions that can be followed to solve a particular problem.
It is a well-defined computational procedure that takes an input and produces a result (output) derived from that input.
1.2 Properties of an algorithm
Finiteness: An algorithm must always terminate after a finite number of steps.
Definiteness/Precise: Each step of an algorithm must be precisely defined; the actions to be carried out need to be rigorously and unambiguously specified for each case.
Input: An algorithm has zero or more inputs, which are the quantities provided to it initially before the algorithm begins.
Output: An algorithm has one or more outputs, i.e., quantities which have a specified relation to the inputs.
Effectiveness: An algorithm must be effective, meaning all operations should be basic enough to be performed exactly and in a finite length of time. It must also achieve its intended function.
Feasibility: An algorithm must be practical, such that each instruction can be realistically carried out.
Independence: An algorithm must be language independent, meaning it can be implemented in any programming language without altering the result.
1.3 Importance of Algorithm in Computer Science
Algorithms are crucial in computer science as they form the basis of computer programming and are used to tackle problems ranging from simple tasks like sorting and searching to complex applications in artificial intelligence and machine learning.
1.4 Types of Algorithms based on Implementation
Iterative Algorithms:
These algorithms execute repeatedly until a specific condition is met. They utilize looping constructs (e.g., for, while, do-while loops).
They generally use less memory and have a linear, straightforward control flow.
Recursive Algorithms:
These algorithms solve problems by repeatedly calling themselves with a smaller, modified input until a base condition is reached.
This often results in more elegant and shorter code but typically uses more memory due to the overhead associated with the call stack.
Both types can solve the same problems, with the choice depending on the specific problem context.
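The contrast above can be illustrated with a minimal sketch, using factorial as a hypothetical example problem solved both ways:

```python
def factorial_iterative(n):
    """Iterative version: a simple loop, constant extra memory."""
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

def factorial_recursive(n):
    """Recursive version: calls itself on a smaller input until the
    base case n <= 1 is reached; uses O(n) call-stack frames."""
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)
```

Both functions return the same results; the iterative one avoids the call-stack overhead mentioned above.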
1.5 Algorithm Analysis (Performance Analysis)
Algorithm analysis refers to the theoretical study of a computer program's performance and resource usage, determining the computational complexity of algorithms (time, storage, and other resources necessary to execute an algorithm).
The analysis needs to evaluate how good or bad an algorithm is regarding efficiency. There might be multiple algorithms to solve a single problem, and analysis allows for comparing them in terms of performance and resource usage.
1.6 Example of Algorithm Analysis
Assume we have a problem P1 with many candidate solutions, each being an algorithm (Algo1, Algo2, Algo3, …, Algo n). Before implementing one, we first analyze the time and space requirements of each to find the algorithm best suited to the task.
1.7 Computational Complexity
Computational complexity refers to the efficiency of an algorithm with respect to the required time and memory.
Complexity is measured as a function of its input size; time and space complexity estimate the time and memory requirements of an algorithm, respectively.
The input size (n) usually refers to the total number of items in the input dataset.
1.7.1 Types of Algorithm Analysis
Prior (a priori) analysis:
Analysis is performed before execution, by studying the algorithm itself.
It is hardware-independent.
Posterior (a posteriori) analysis:
Analysis is conducted after execution, so the results depend on the computer configuration and hardware.
1.7.2 What is Space Complexity?
Space complexity is the amount of computer memory required to execute an algorithm successfully. It typically accounts for memory:
To store program instructions.
To store constant values.
To store variable values.
For other ancillary needs, such as function calls and jump statements.
It assesses how much memory is used as the input size grows.
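As a small illustration of how auxiliary memory can differ between two solutions to related problems (a hypothetical example, not from the source):

```python
def running_sum_constant_space(values):
    """O(1) auxiliary space: a single accumulator, regardless of input size."""
    total = 0
    for v in values:
        total += v
    return total

def prefix_sums_linear_space(values):
    """O(n) auxiliary space: stores one partial sum per input element."""
    sums = []
    total = 0
    for v in values:
        total += v
        sums.append(total)
    return sums
```

The first function's memory use stays flat as the input grows; the second's grows linearly with it.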
1.7.3 Time Complexity
Time complexity refers to the time required by an algorithm to execute as a function of its input size (n).
Different Time Complexities in the Analysis of an Algorithm
Analysis distinguishes three key cases of time complexity:
Worst-case time complexity: the maximum time taken over all inputs of size n.
Average-case time complexity: the expected time taken, averaged over all inputs of size n.
Best-case time complexity: the minimum time taken over all inputs of size n.
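Linear search makes the three cases concrete, assuming this standard formulation of the algorithm:

```python
def linear_search(items, target):
    """Scan items left to right for target; return its index or -1.

    Best case O(1): target is the first element.
    Worst case O(n): target is the last element or absent.
    Average case O(n): on average about half the elements are examined.
    """
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1
```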
Rules for the Step Count Method
The running time of an algorithm can be estimated using the Step Count Method, where:
Comments: contribute zero steps, as they are not executed.
Assignments: count as one step each.
Conditionals: the condition check counts as one step each time it is evaluated.
Loops: the loop body is counted once per iteration, and the loop header executes one extra time for the final failed check.
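Applying these rules to a small hypothetical function (step counts shown as comments, with n = len(values)):

```python
def sum_list(values):      # step counts, where n = len(values)
    total = 0              # assignment: 1 step
    for v in values:       # loop header: executes n + 1 times
        total += v         # body assignment: n steps
    return total           # return: 1 step
                           # total: 2n + 3 steps, i.e. O(n)
```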
2.0 ALGORITHM DESIGN TECHNIQUES
1. Divide and Conquer
Breaks down a problem into smaller independent subproblems, recursively solves them, and combines their results to derive the solution.
Examples: Merge Sort, Quick Sort.
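A compact sketch of Merge Sort shows the three phases named above (divide, recursively solve, combine):

```python
def merge_sort(arr):
    """Divide the list in half, sort each half recursively, merge the results."""
    if len(arr) <= 1:          # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # conquer left half
    right = merge_sort(arr[mid:])   # conquer right half
    # Combine: merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```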
2. Dynamic Programming
Similar to Divide and Conquer but used when subproblems overlap; memoization is used to store results to avoid redundant recalculations.
Examples: 0-1 Knapsack, Subset Sum Problem. (Dijkstra's shortest-path algorithm is sometimes listed here as well, though it is more commonly classified as a greedy algorithm.)
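Memoization is easiest to see on the Fibonacci sequence, used here purely as an illustration of overlapping subproblems:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """fib(n) depends on the overlapping subproblems fib(n-1) and fib(n-2);
    the cache stores each result so it is computed only once, giving O(n)
    time instead of the O(2^n) of the naive recursion."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```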
3. Greedy Algorithms
Make locally optimal choices at each step. May not guarantee a global optimum but can be more efficient for certain problems.
Example: Fractional Knapsack problem.
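A minimal sketch of the Fractional Knapsack problem, assuming items are given as (value, weight) pairs; the greedy choice is to take the highest value-per-weight item first:

```python
def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; fractions of an item may be taken.
    Greedily takes items in decreasing order of value/weight ratio."""
    total = 0.0
    for value, weight in sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)        # take as much as still fits
        total += value * (take / weight)    # fractional value of what was taken
        capacity -= take
    return total
```

For this problem the greedy choice happens to be globally optimal; for the 0-1 variant (no fractions) it is not, which is why that variant needs dynamic programming.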
4. Branch and Bound
An optimization technique systematically exploring the solution space, typically employed for integer programming and combinatorial optimization problems.
5. Recursion
A programming approach where functions call themselves on smaller instances of the same problem—integral to various algorithm design techniques.
3.0 Algorithm Analysis Techniques
To assess algorithms, one can employ theoretical measures that express how an algorithm's resource requirements grow as a function of input size, typically using asymptotic (Big-O) notation.
4.0 COMMON ALGORITHMIC COMPLEXITIES
Overview of Time Complexities:
Constant: O(1)
Linear: O(n)
Logarithmic: O(log n)
Linearithmic: O(n log n)
Quadratic: O(n²)
Cubic: O(n³)
Exponential: O(2^n)
Factorial: O(n!)
Examples of Time Complexities:
O(1) - Constant time regardless of input size, e.g., array indexing or an average-case hash-map lookup.
O(n) - Linear search through an array.
O(log n) - Binary search.
O(n²) - Bubble sort or selection sort (nested loops).
O(2^n) - Naive recursive Fibonacci computation.
O(n!) - Generating permutations.
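The O(log n) entry above can be made concrete with a standard binary search, in which each comparison halves the remaining search interval:

```python
def binary_search(sorted_items, target):
    """Return the index of target in a sorted list, or -1 if absent.
    Each iteration halves the interval [lo, hi], so at most
    O(log n) comparisons are made."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1    # target can only be in the upper half
        else:
            hi = mid - 1    # target can only be in the lower half
    return -1
```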
5.0 Time Complexity of Popular Algorithms
Sorting Algorithms:
Quick Sort: O(n log n) on average; O(n²) in the worst case
Merge Sort: O(n log n)
Bubble Sort: O(n²)
Searching Algorithms:
Linear Search: O(n)
Binary Search: O(log n)
5.1 Time Complexity of Various Algorithms
Each sorting and searching algorithm has distinct time complexities that impact efficiency in various scenarios, especially concerning large datasets.
A careful selection of algorithms based on their computational complexities is vital for optimal performance in software engineering.
Summary
This overview has outlined foundational concepts in algorithms, their properties, analysis mechanisms, and complexity assessments essential for understanding algorithm effectiveness in computer science.