Algorithms

Introduction to Problem Solving and Algorithms

  • Every day presents numerous problems, each with one or more possible solutions.

    • Solutions may vary in efficiency; typically, the most efficient solution is preferred.

    • Example: While commuting between home and an office or school, multiple paths are available, but the shortest path is usually chosen.

    • This principle applies in computational problems: various algorithms can be designed as potential solutions, and the most efficient one is selected.

Definition of an Algorithm

  • An algorithm is defined as a set of steps to solve a computational problem.

    • In computer science, when solving a problem, these steps are synthesized into an algorithm.

    • For instance, to find the sum of two integers, "a" and "b":

    1. Input two integers.

    2. Create a variable "sum" to hold the sum of the two integers.

    3. Assign the sum of "a" and "b" to the "sum" variable.

    4. Return the "sum" variable.

Sample Implementation of Algorithm

  • Example implementation in pseudo-code to calculate the sum:

int findSum(int a, int b) {
    int sum; // Creating the sum variable
    sum = a + b; // Storing the sum of a and b
    return sum; // Returning the sum variable
}

  • In this example, the three components are:

    • Input: The integers "a" and "b".

    • Algorithm: The steps outlined to compute the sum.

    • Output: The resulting sum.

Input in Algorithm Design

  • Input refers to the data on which the algorithm operates, akin to raw materials processed by a machine into a finished product.

    • Example Input: The integers "a" and "b".

    • Pre-algorithm considerations include:

    • Data type of input

    • Range or distribution of input

    • Analyze the input critically before developing the algorithm.

Characteristics of an Algorithm

  1. Algorithm:

    • A well-defined step-by-step procedure to take one or more values as input and produce a corresponding set of values as output.

    • In the sum example, the three procedural steps constitute the algorithm.

  2. Output:

    • The desired outcome of the problem; should consistently yield the correct result for each possible input.

Criteria for a Good Algorithm

  • There are multiple algorithms for a given problem. Criteria to determine the quality of an algorithm include:

    1. Correctness:

    • An algorithm is deemed correct if it generates the correct output for every possible input. If incorrect output occurs for any input, the algorithm is flawed.

    2. Finiteness:

    • An important and frequently overlooked requirement:

    • The algorithm must terminate after a finite number of steps, avoiding scenarios like infinite loops or stack overflows.

    3. Efficiency:

    • An efficient algorithm optimally utilizes system resources, characterized by:

      • Minimal computational time (the duration required to yield output for a given input).

      • Minimal memory usage.

    • There exists a trade-off between time and space: prioritize computational time or memory usage based on specific use cases.

Algorithm Efficiency

  • Algorithm efficiency is primarily defined by:

    • Space Complexity: The total space required for the algorithm's operation across various input sizes.

    • Example: Creating an array of size "n" indicates a space complexity of $O(n)$.

    • Time Complexity: The quantity of operations performed by the algorithm relative to input size (assuming equal time consumption for each operation).

    • Example: for a sorting algorithm, time complexity is measured by the number of operations (such as comparisons) performed, which grows with the number of items sorted.

    • Counting operations, rather than measuring wall-clock time, keeps the analysis independent of how fast a particular machine executes each statement.

Understanding Efficient Algorithm Selection

  • When comparing multiple algorithms for the same task:

    • Running them on differing machines yields unreliable evaluations due to processor variances.

    • Running them on the same machine may still misrepresent results due to concurrent executions affecting timing.

Standard Notation for Algorithm Analysis

  • An efficient way to analyze algorithms involves asymptotic notation, which disregards system configuration and focuses on input size's growth rate effect.

    • Provides insight into how execution time and space consumption evolve with varying input sizes.

Asymptotic Notations

  • Three key asymptotic notations aid in representing time complexity:

    1. Θ Notation (Theta): Represents the tight bound on growth rates.

    2. Big O Notation: Defines the upper limit of an algorithm's complexity.

    3. Ω Notation: Represents the lower limit of an algorithm's complexity.
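As a worked illustration (not from the notes), all three notations can be applied to the concrete function $f(n) = 3n + 5$:

```latex
% Big O (upper bound): 3n + 5 <= 4n whenever n >= 5
f(n) = 3n + 5 \leq 4n \quad \text{for } n \geq 5 \implies f(n) = O(n)
% Omega (lower bound): 3n + 5 >= 3n for all n >= 1
f(n) = 3n + 5 \geq 3n \quad \text{for } n \geq 1 \implies f(n) = \Omega(n)
% Theta (tight bound): f(n) is both O(n) and Omega(n)
f(n) = O(n) \text{ and } f(n) = \Omega(n) \implies f(n) = \Theta(n)
```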

Cases in Algorithm Time Complexity

  • Algorithms exhibit various performance levels based on input type:

    1. Best Case: Represents the minimal operations needed.

    2. Average Case: Accounts for the average running time across all potential inputs and their distributions.

    3. Worst Case: Reflects the maximum operations required to complete the task.

Example of Searching in an Array

  • Problem: Determine if integer "k" exists within array "arr":

    • Input: An array of size "n" and the integer "k".

    • Output: Return 1 if "k" is found; otherwise return 0.

  • Possible solution with linear search to iterate over the array:

int searchK(int arr[], int n, int k) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == k) return 1;
    }
    return 0;
}

Time Complexity Analysis

  • Assume each basic operation takes a constant amount of time, denoted "C":

    • If "N" operations are executed, the total running time equates to $C \times N$.

    • For the search example, the running time therefore depends on where "k" sits within the array.

Complexity Cases Example

  • Best Case: The sought number is at the first position of the array, so only one comparison is needed.

  • Average Case: Assuming the number is equally likely to be at any position, roughly half the array is examined on average.

  • Worst Case: The number is not present, so every element must be examined.

Big O Notation Explained

  • Big O Notation expresses the upper bound of algorithm time complexity; denotes the worst-case scenario.

    • General form: For a function g(n),
      $O(g(n)) = \{ f(n) : \text{there exist positive constants } c \text{ and } n_0 \text{ such that } 0 \leq f(n) \leq c\,g(n) \text{ for all } n \geq n_0 \}$

Ignoring Constants and Non-Dominating Terms

  • Algorithmic analysis often disregards constant factors and non-dominating terms when evaluating efficiency.

    • Regardless of constant multiplicative factors, growth rates dictate algorithm performance, so only the leading term matters (e.g., a running time of $n-1$ is treated as $O(n)$).
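A short worked example (illustrative, not from the notes) shows how the Big O definition justifies dropping constants and lower-order terms:

```latex
% Claim: f(n) = 3n^2 + 10n + 5 is O(n^2).
% Choose c = 18 and n_0 = 1; for all n >= 1,
3n^2 + 10n + 5 \leq 3n^2 + 10n^2 + 5n^2 = 18n^2
% so 0 <= f(n) <= 18 n^2 for all n >= 1, matching the definition:
f(n) = 3n^2 + 10n + 5 = O(n^2)
```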

Conclusion on Big O Complexity

  • The Big O definition makes the earlier intuition precise:

    • By allowing a constant factor $c$ and ignoring small inputs ($n < n_0$), Big O captures only the growth rate of $f(n)$, which is what determines performance at scale.

Useful Mathematical Summations

  • Summation of the first n natural numbers:
    $1 + 2 + 3 + \dots + n = \frac{n(n+1)}{2}$

  • Geometric series (for $a \neq 1$):
    $a^0 + a^1 + a^2 + \dots + a^{n-1} = \frac{a^n - 1}{a - 1}$

  • Understanding algorithm efficiency demands robust comprehension of time/space trade-offs, algorithms' asymptotic bounds, and real-world implications of computed results.