C949 Data Structures & Algorithms

Description and Tags

C949 Data Structures and Algorithms Study Guide v4


1
New cards

Characteristics of Algorithms

Finiteness, definiteness, input, output, generality, and effectiveness. These attributes ensure that an algorithm can be executed in a structured manner to solve a problem.

2
New cards

Characteristics of Algorithms - Finiteness

An algorithm must always terminate after a finite number of steps, ensuring it does not run indefinitely.

3
New cards

Characteristics of Algorithms - Definiteness

Each step of the algorithm must be precisely defined, with clearly stated instructions, so that every action is unambiguous and can be carried out exactly as intended.

4
New cards

Characteristics of Algorithms - Input

An algorithm requires one or more inputs: the values supplied to the algorithm before processing begins, drawn from a specified set of acceptable values.

5
New cards

Characteristics of Algorithms - Output

One or more outputs must be produced by an algorithm. The output is the result obtained once every step has been completed, and its relationship to the input should be clear.

6
New cards

Characteristics of Algorithms - Effectiveness

An algorithm's steps must be basic enough to be carried out exactly and in finite time using fundamental operations. Every operation in the algorithm should be feasible with the resources at hand.

7
New cards

Characteristics of Algorithms - Generality

Rather than being limited to a single specific case, an algorithm should solve a whole class of problems. It should provide a general solution that handles any input within a specified range or domain.

8
New cards

Factors of an Algorithm

Modularity, Correctness, Maintainability, Functionality, Robustness, User-friendly, Simplicity, Extensibility

9
New cards

Modularity

Breaking problems into smaller, manageable modules.

10
New cards

Correctness

Outputs match expected results from given inputs.

11
New cards

Maintainability

Structured design allows easy modifications without major changes.

12
New cards

Functionality

Logical steps addressing real-world problem-solving.

13
New cards

Robustness

Handles invalid or unexpected inputs gracefully rather than failing.

14
New cards

User-friendly

Easily understandable for programmers and designers.

15
New cards

Simplicity

Simple algorithms are easier to comprehend.

16
New cards

Extensibility

Allows other programmers to build upon the algorithm.

17
New cards

Types of Algorithms

Brute Force, Recursive, Encryption, Backtracking, Searching, Sorting, Hashing, Divide and Conquer, Greedy, Dynamic Programming, Randomized

18
New cards

Brute Force Algorithm

A straightforward approach that exhaustively tries all possible solutions, suitable for small problem instances but may become impractical for larger ones due to its high time complexity.

19
New cards

Recursive Algorithm

A method that breaks a problem into smaller, similar subproblems and repeatedly applies itself to solve them until reaching a base case, making it effective for tasks with recursive structures.

20
New cards

Encryption Algorithm

Utilized to transform data into a secure, unreadable form using cryptographic techniques, ensuring confidentiality and privacy in digital communications and transactions.

21
New cards

Backtracking Algorithm

A trial-and-error technique used to explore potential solutions by undoing choices when they lead to an incorrect outcome, commonly employed in puzzles and optimization problems.

22
New cards

Searching Algorithm

Designed to find a specific target within a dataset, enabling efficient retrieval of information from sorted or unsorted collections.

23
New cards

Sorting Algorithm

Aimed at arranging elements in a specific order, like numerical or alphabetical, to enhance data organization and retrieval.

24
New cards

Hashing Algorithm

Converts data into a fixed-size hash value, enabling rapid data access and retrieval in hash tables, commonly used in databases and password storage.

25
New cards

Divide and Conquer Algorithm

Breaks a complex problem into smaller subproblems, solves them independently, and then combines their solutions to address the original problem effectively.

26
New cards

Greedy Algorithm

Makes locally optimal choices at each step in the hope of finding a global optimum, useful for optimization problems but may not always lead to the best solution.

27
New cards

Dynamic Programming Algorithm

Stores and reuses intermediate results to avoid redundant computations, enhancing the efficiency of solving complex problems.

28
New cards

Randomized Algorithm

Utilizes randomness in its steps to achieve a solution, often used in situations where an approximate or probabilistic answer suffices.

29
New cards

Concepts of Recursive Algorithms - Base Case

This is the condition under which the recursion stops. It represents the simplest instance of the problem, which can be solved directly without further recursion.

30
New cards

Concepts of Recursive Algorithms - Recursive Case

This is the part of the algorithm that breaks the problem down into smaller instances of the same problem and then calls the algorithm recursively on these smaller instances.

31
New cards

Concepts of Recursive Algorithms - Stack

Each recursive call is placed on the system call stack. When the base case is reached, the stack begins to unwind as each instance of the function returns its result.
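
As a minimal illustration (factorial is an assumed example, not taken from these cards), a short Python sketch showing the base case, the recursive case, and the stack unwinding:

def factorial(n):
    # Base case: the simplest instance, solved directly without recursion.
    if n <= 1:
        return 1
    # Recursive case: reduce to a smaller instance of the same problem.
    # Each call is pushed onto the call stack; once n <= 1 is reached,
    # the stack unwinds and each frame returns its partial result.
    return n * factorial(n - 1)

print(factorial(5))  # 120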

32
New cards

Advantages of Recursion - Simplicity

Recursive solutions are often more elegant and easier to understand than their iterative counterparts.

33
New cards

Advantages of Recursion - Direct Translation

Some problems are naturally recursive, like tree traversals, making recursion the most straightforward approach.

34
New cards

Disadvantages of Recursion - Performance

Recursive algorithms can be less efficient due to the overhead of multiple function calls and potential stack overflow issues for deep recursion.

35
New cards

Disadvantages of Recursion - Memory Usage

Recursion can consume more memory because each function call adds a new frame to the call stack.

36
New cards

When to Use Recursion

  • When a problem can naturally be divided into similar sub-problems (e.g., tree traversal, searching algorithms like binary search).

  • When the recursive solution is significantly simpler or more intuitive than an iterative one.

37
New cards

Name All Searching Algorithms

  • Linear Search

  • Binary Search

  • Interpolation Search

  • Depth-First Search (DFS)

  • Breadth-First Search (BFS)

38
New cards

Linear Search - Concept

  • Linear search is the simplest search algorithm.

  • It works by sequentially checking each element of the array or list until the target element is found or the end of the collection is reached.

39
New cards

Linear Search - Algorithm

  1. Start from the first element of the array.

  2. Compare the current element with the target element.

  3. If they match, return the index of the element.

  4. If they don't match, move to the next element and repeat the process.

  5. If the target element is not found by the end of the array, return a "not found" indication.

40
New cards

Linear Search - Time Complexity

O(n), where n is the number of elements in the array. This is because in the worst case, the algorithm may need to check every element in the array.

41
New cards

Linear Search - When to Use

  • When the array or list is small.

  • When the array is unsorted.

  • When simplicity is more important than performance.

42
New cards

Linear Search - Code

def linear_search(arr, target):
    for i in range(len(arr)):
        if arr[i] == target:
            return i
    return -1  # Return -1 if the element is not found

43
New cards

Binary Search - Concept

  • Binary search is much more efficient than linear search but requires the array or list to be sorted.

  • It works by repeatedly dividing the search interval in half. If the target value is less than the middle element, the search continues in the left half, otherwise in the right half.

44
New cards

Binary Search - Algorithm

  1. Start with two pointers, one at the beginning (low) and one at the end (high) of the sorted array.

  2. Find the middle element of the current interval.

  3. Compare the middle element with the target:

    • If they match, return the index of the middle element.

    • If the target is less than the middle element, repeat the search on the left half.

    • If the target is greater, repeat the search on the right half.

  4. If the interval becomes invalid (low > high), return a "not found" indication.

45
New cards

Binary Search - Time Complexity

O(log n), where n is the number of elements in the array. This logarithmic time complexity makes binary search significantly faster than linear search for large datasets.

46
New cards

Binary Search - When to Use

  • When the array or list is sorted.

  • When the array is large and efficiency is crucial.

47
New cards

Binary Search - Code

def binary_search(arr, target):
    low = 0
    high = len(arr) - 1

    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1

    return -1  # Return -1 if the element is not found

48
New cards

Linear Search vs Binary Search - Efficiency, Simplicity, Use Case

  • Efficiency: Binary search is faster than linear search, especially for large datasets, but it requires the array to be sorted.

  • Simplicity: Linear search is simpler to implement and doesn't require the array to be sorted, making it more versatile for smaller or unsorted datasets.

  • Use Cases:

    • Linear Search: Suitable for small or unsorted collections where the simplicity of the algorithm outweighs the need for speed.

    • Binary Search: Ideal for large, sorted collections where performance is a priority.

49
New cards

Binary Search - Example

Initial Setup:

  • The array arr is {45, 77, 89, 90, 94, 99, 100}.

  • The key to find is 100.

  • Initialize two pointers: low = 0 (start index) and high = 6 (end index).

First Iteration:

  • Calculate the middle index: mid = (low + high) / 2 = (0 + 6) / 2 = 3.

  • arr[3] = 90, which is less than the key, so update low to mid + 1 = 4.

Second Iteration:

  • Recalculate: mid = (4 + 6) / 2 = 5.

  • arr[5] = 99, which is still less than the key, so update low to 6.

Third Iteration:

  • mid = (6 + 6) / 2 = 6.

  • arr[6] = 100, which equals the key, so the search ends: the key is found at index 6.

In general, each iteration compares arr[mid] with the key: if arr[mid] is less than the key, set low = mid + 1; if greater, set high = mid - 1; if equal, the key has been found. If low exceeds high, the key is not present.

50
New cards

Searching Algorithms

Linear Search, Binary Search, Interpolation Search, Depth-First Search, Breadth-First Search

51
New cards

Linear Search - Concept, Time Complexity, Use Case, Distinct Characteristics, Worst/Average/Best

  • Concept: As discussed earlier, linear search involves checking each element in a list or array sequentially until the target element is found or the end of the collection is reached.

  • Time Complexity: O(n), where n is the number of elements.

  • Use Case: Best used when the list is small or unsorted.

  • Distinct Characteristics:

    • Simple, sequential search.

    • Checks each element one by one.

    • Works on both sorted and unsorted data.

  • Linear Search Worst, Average and Best

    • Best Case: O(1) — The target element is the first element.

    • Average Case: O(n) — The target element is somewhere in the middle or not in the array.

    • Worst Case: O(n) — The target element is the last element or not present.

52
New cards

Binary Search - Concept, Time Complexity, Use Case, Distinct Characteristics, Worst/Average/Best

  • Concept: Binary search operates on a sorted list. It repeatedly divides the search interval in half until the target element is found or the interval is empty.

  • Time Complexity: O(log n)

  • Use Case: Ideal for large, sorted datasets.

  • Distinct Characteristics:

    • Requires a sorted array.

    • Divides the search interval in half repeatedly.

    • Efficient, logarithmic time complexity.

  • Binary Search Worst, Average and Best

    • Best Case: O(1) — The target element is the middle element.

    • Average Case: O(log n) — The target element is not immediately found but lies within the sorted array.

    • Worst Case: O(log n) — The target element is at the extreme ends or not present.

53
New cards

Interpolation Search - Concept, Time Complexity, Use Case, Worst/Average/Best

  • Concept: Similar to binary search but suited to uniformly distributed data. It estimates the position of the target element based on its value.

  • Time Complexity: O(log log n) on average for uniformly distributed data, O(n) in the worst case.

  • Use Case: Effective when the data is uniformly distributed.

  • Interpolation Search Worst, Average and Best

    • Best Case: O(1) — The target element is exactly where the interpolation suggests.

    • Average Case: O(log log n) — Uniformly distributed data.

    • Worst Case: O(n) — Highly skewed data distribution or poor interpolation estimates.
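
A minimal Python sketch of interpolation search, assuming a sorted list of numbers; the probe index is estimated from the target's value relative to the endpoints rather than taken as the midpoint:

def interpolation_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high and arr[low] <= target <= arr[high]:
        if arr[low] == arr[high]:  # avoid division by zero on uniform values
            return low if arr[low] == target else -1
        # Estimate the probe position from the value, not the midpoint.
        pos = low + (target - arr[low]) * (high - low) // (arr[high] - arr[low])
        if arr[pos] == target:
            return pos
        if arr[pos] < target:
            low = pos + 1
        else:
            high = pos - 1
    return -1  # not found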

54
New cards

Depth-First Search (DFS) - Worst, Average and Best

  • Best Case: O(1) — The target node is found immediately.

  • Average Case: O(V+E) — Typically all vertices (V) and edges (E) must be explored.

  • Worst Case: O(V+E) — The target node is the last one discovered.

55
New cards

Breadth-First Search (BFS) - Worst, Average and Best

  • Best Case: O(1) — The target node is the root or the first node checked.

  • Average Case: O(V+E) — All nodes and edges need to be explored.

  • Worst Case: O(V+E) — The target node is the last one explored.
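
A minimal Python sketch of both traversals on a small adjacency-list graph (the graph itself is an illustrative assumption):

from collections import deque

graph = {  # illustrative adjacency list
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['D'],
    'D': [],
}

def dfs(start):
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()          # LIFO: go deep before going wide
        if node not in visited:
            visited.add(node)
            order.append(node)
            stack.extend(reversed(graph[node]))
    return order

def bfs(start):
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()      # FIFO: visit level by level
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order

print(dfs('A'))  # ['A', 'B', 'D', 'C']
print(bfs('A'))  # ['A', 'B', 'C', 'D']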

56
New cards

Sorting Algorithms

  • Bubble Sort

  • Selection Sort

  • Insertion Sort

  • Merge Sort

  • Quick Sort

  • Heap Sort

  • Counting Sort

  • Radix Sort

  • Bucket Sort

  • Shell Sort

57
New cards

Bubble Sort - Use Case, Distinct Characteristics, Worst/Average/Best

  • Use Case: Simple but inefficient for large datasets. Best used for educational purposes or small lists.

  • Distinct Characteristics:

    • Repeatedly swaps adjacent elements if they are in the wrong order.

    • Simple, but inefficient for large datasets.

    • “Bubbles” the largest element to the end of the list.

  • Bubble Sort Worst, Average and Best

    • Best Case: O(n) — The array is already sorted (with an optimized version that stops early).

    • Average Case: O(n²) — Average case with random elements.

    • Worst Case: O(n²) — The array is sorted in reverse order.

  • Bubble: Look for something that swaps so the result can “bubble” to the top. (Swap, Exchange)
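
A minimal Python sketch of bubble sort, including the early-exit optimization mentioned above:

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # Each pass "bubbles" the largest unsorted element to the end.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps means already sorted: best case O(n)
            break
    return arr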

58
New cards

Selection Sort - Use Case, Distinct Characteristics, Worst/Average/Best

  • Use Case: Inefficient for large lists, but useful when memory writes are more expensive than comparisons.

  • Distinct Characteristics:

    • Finds the minimum element and swaps it with the first unsorted element.

    • Reduces the problem size by one in each iteration.

    • Always performs O(n²) comparisons, regardless of input.

  • Selection Sort Worst, Average and Best

    • Best Case: O(n²) — Selection sort does not improve with better input, always O(n²).

    • Average Case: O(n²) — Average case with random elements.

    • Worst Case: O(n²) — Selection sort is insensitive to input order.

  • Selection: Look for code that repeatedly finds the minimum (or maximum) element and moves it to the beginning (or end) of the list. (Select minimum, Swap with start)
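
A minimal Python sketch of selection sort; note the single swap per pass, which is why it suits cases where writes are expensive:

def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the minimum element in the unsorted portion...
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # ...and swap it with the first unsorted element (one write per pass).
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr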

59
New cards

Insertion Sort - Use Case, Distinct Characteristics, Worst/Average/Best

  • Use Case: Good for small or nearly sorted lists.

  • Distinct Characteristics:

    • Builds a sorted list one element at a time.

    • Efficient for small or nearly sorted datasets.

    • Shifts elements to make space for the current element.

  • Insertion Sort Worst, Average and Best

    • Best Case: O(n) — The array is already sorted.

    • Average Case: O(n²) — Average case with random elements.

    • Worst Case: O(n²) — The array is sorted in reverse order.

  • Insertion: Look for code that builds a sorted portion of the list one element at a time by inserting each new element into its correct position within the already-sorted part. (Insert, Shift Element)
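
A minimal Python sketch of insertion sort, shifting elements to make space for the current element:

def insertion_sort(arr):
    for i in range(1, len(arr)):
        current = arr[i]
        j = i - 1
        # Shift larger elements right to make space for the current element.
        while j >= 0 and arr[j] > current:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = current  # insert into its correct position
    return arr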

60
New cards

Merge Sort - Use Case, Distinct Characteristics, Worst/Average/Best

  • Use Case: Efficient and stable; good for large datasets.

  • Distinct Characteristics:

    • Divides the list into halves, sorts each half, and then merges them.

    • Stable and efficient for large datasets.

    • Requires additional space for merging.

  • Merge Sort Worst, Average and Best

    • Best Case: O(n log n) — Merge sort’s time complexity is the same in all cases.

    • Average Case: O(n log n).

    • Worst Case: O(n log n).

  • Merge: Look for something that continually splits a list in half. (Merge, Split)
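
A minimal Python sketch of merge sort: split, sort each half, then merge:

def merge_sort(arr):
    if len(arr) <= 1:               # base case: single element is sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # sort each half recursively
    right = merge_sort(arr[mid:])
    merged = []                     # merging needs O(n) extra space
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:     # <= keeps equal elements in order (stable)
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]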

61
New cards

Quick Sort - Use Case, Distinct Characteristics, Worst/Average/Best

  • Use Case: Often faster in practice than merge sort but less stable.

  • Distinct Characteristics:

    • Selects a “pivot” element and partitions the array around it.

    • Recursively sorts the partitions.

    • Efficient, but can degrade to O(n²) if poor pivot selection occurs.

  • Quicksort Worst, Average and Best

    • Best Case: O(n log n) — The pivot splits the array into two nearly equal halves.

    • Average Case: O(n log n) — Average case with random pivots.

    • Worst Case: O(n²) — The pivot is always the smallest or largest element, leading to unbalanced partitions.

  • Quick Sort: Look for the keywords “pivot” and/or “split”. (Pivot, Split)
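
A minimal Python sketch of quicksort; this is the simple out-of-place variant (textbook quicksort usually partitions in place):

def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]  # pivot choice matters: a bad pivot degrades to O(n²)
    # Partition into elements below, equal to, and above the pivot.
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quick_sort(left) + middle + quick_sort(right)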

62
New cards

Heap Sort - Use Case, Distinct Characteristics, Worst/Average/Best

  • Use Case: Useful when memory usage is a concern as it’s an in-place algorithm.

  • Distinct Characteristics:

    • Utilizes a binary heap data structure.

    • Builds a max-heap and repeatedly extracts the maximum element.

    • Efficient and in-place, but not stable.

  • Heap Sort Worst, Average and Best

    • Best Case: O(n log n) — Heap sort’s time complexity is the same in all cases.

    • Average Case: O(n log n).

    • Worst Case: O(n log n).

  • Heap Sort: Look for code that uses a heap data structure to repeatedly extract the maximum (or minimum) element and rebuilds the heap. (Heapify, Extract Max, Build Heap)
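
A hedged sketch using Python's heapq module, which is a min-heap; textbook heap sort instead builds a max-heap in place, but the extract-and-rebuild idea is the same:

import heapq

def heap_sort(arr):
    heap = list(arr)
    heapq.heapify(heap)          # build the heap in O(n)
    # Repeatedly extract the smallest remaining element: n pops, O(log n) each.
    return [heapq.heappop(heap) for _ in range(len(heap))]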

63
New cards

Counting Sort - Use Case, Distinct Characteristics, Worst/Average/Best

  • Use Case: Efficient for sorting integers or other items with a small range of possible values.

  • Distinct Characteristics:

    • Non-comparative sorting.

    • Counts occurrences of each element and uses this information to place elements.

    • Efficient for small ranges of integers.

  • Counting Sort Worst, Average and Best

    • Best Case: O(n+k) — k is the range of the input.

    • Average Case: O(n+k).

    • Worst Case: O(n+k).
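
A minimal Python sketch of counting sort, assuming non-negative integers with a small range:

def counting_sort(arr):
    if not arr:
        return []
    k = max(arr) + 1             # range of possible values (assumed small)
    counts = [0] * k
    for value in arr:            # count occurrences of each value
        counts[value] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)   # place each value per its count
    return result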

64
New cards

Radix Sort - Use Case, Distinct Characteristics, Worst/Average/Best

  • Use Case: Effective for sorting large numbers or strings with a fixed length.

  • Distinct Characteristics:

    • Sorts numbers by processing individual digits.

    • Non-comparative, stable, and efficient for specific data types.

    • Often combined with counting sort.

  • Radix Sort Worst, Average and Best

    • Best Case: O(n*k) — k is the number of digits in the largest number.

    • Average Case: O(n*k).

    • Worst Case: O(n*k).

  • Radix Sort: Look for code that sorts numbers based on their individual digits, starting from the least significant digit (LSD) or the most significant digit (MSD). (Count, Frequency, Sum)
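
A minimal Python sketch of LSD radix sort for non-negative integers, grouping stably by one digit per pass:

def radix_sort(arr):
    if not arr:
        return []
    result = list(arr)
    exp = 1                                    # 1s, 10s, 100s, ... place
    while max(result) // exp > 0:
        buckets = [[] for _ in range(10)]      # one bucket per digit 0-9
        for value in result:
            buckets[(value // exp) % 10].append(value)  # stable grouping
        result = [value for bucket in buckets for value in bucket]
        exp *= 10
    return result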

65
New cards

Bucket Sort - Use Case, Distinct Characteristics, Worst/Average/Best

  • Use Case: Good for uniformly distributed data.

  • Distinct Characteristics:

    • Distributes elements into buckets and sorts each bucket individually.

    • Efficient when the input is uniformly distributed.

    • Often combined with another sorting algorithm like insertion sort.

  • Bucket Sort Worst, Average and Best

    • Best Case: O(n+k) — k is the number of buckets; assumes uniform distribution.

    • Average Case: O(n+k).

    • Worst Case: O(n²) — All elements end up in one bucket (degenerate case).

  • Bucket: Look for something that distributes the values into “buckets” where they are individually sorted. (Bucket)
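
A minimal Python sketch of bucket sort, assuming roughly uniform numeric data; Python's built-in sorted() stands in for the per-bucket insertion sort:

def bucket_sort(arr, num_buckets=10):
    if not arr:
        return []
    lo, hi = min(arr), max(arr)
    width = (hi - lo) / num_buckets or 1       # bucket width; guard against zero
    buckets = [[] for _ in range(num_buckets)]
    for value in arr:
        # Map each value to a bucket; clamp the maximum into the last bucket.
        index = min(int((value - lo) / width), num_buckets - 1)
        buckets[index].append(value)
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))          # sort each bucket individually
    return result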

66
New cards

Shell Sort - Distinct Characteristics, Worst/Average/Best

  • Distinct Characteristics:

    • Generalization of insertion sort with a gap sequence.

    • Sorts elements far apart and gradually reduces the gap.

    • Efficient for medium-sized datasets.

    • Time Complexity: Depends on the gap sequence; commonly O(n^(3/2)).

  • Shell Sort Worst, Average and Best

    • Best Case: O(n log n) — Occurs when the array is already sorted or nearly sorted, especially when using a good gap sequence like the Knuth sequence.

    • Average Case: O(n^(3/2)) or O(n^1.5) — Highly dependent on the gap sequence used. With commonly used sequences like the Knuth sequence, the average-case complexity is approximately O(n^1.5).

    • Worst Case: O(n²) — Can degrade to O(n²), particularly with poorly chosen gap sequences like the original Shell sequence (where the gaps are halved each time).

  • Shell Sort: Look for code that sorts elements at specific intervals and gradually reduces the interval until it performs a final insertion sort. (Gap, Interval)
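
A minimal Python sketch of shell sort using the original gap sequence (halving the gap each pass):

def shell_sort(arr):
    gap = len(arr) // 2
    while gap > 0:
        # Gapped insertion sort: compare elements `gap` positions apart.
        for i in range(gap, len(arr)):
            current = arr[i]
            j = i
            while j >= gap and arr[j - gap] > current:
                arr[j] = arr[j - gap]
                j -= gap
            arr[j] = current
        gap //= 2    # final pass with gap 1 is a plain insertion sort
    return arr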

67
New cards

Sort - Key Observations

  • Bubble Sort, Selection Sort, and Insertion Sort: These are simple but inefficient for large datasets, especially in the worst case.

  • Merge Sort and Heap Sort: Consistent O(n log n) performance regardless of the input; merge sort is also stable, while heap sort is not.

  • Quick Sort: Very efficient on average but can degrade to O(n²) in the worst case without proper pivot selection.

  • Counting Sort, Radix Sort, and Bucket Sort: Efficient for specific types of data (e.g., integers within a fixed range) but less versatile.

68
New cards

Sort - Choosing the Right Algorithm

  • Small datasets: Simpler algorithms like bubble sort, selection sort, or insertion sort might suffice.

  • Large datasets: More efficient algorithms like merge sort, quick sort, or heap sort are preferred.

  • Sorted data: Algorithms like insertion sort can be very efficient.

  • Special conditions: Use counting sort, radix sort, or bucket sort if the data is within a certain range or has other specific properties.

69
New cards

Big O Notation - What is Big O Notation and why Use Big O Notation?

What is Big O Notation?

  • Big O Notation: It provides an upper bound on the time or space complexity of an algorithm, representing the worst-case scenario. It’s a way to describe the efficiency of an algorithm as the input size grows towards infinity.

Why Use Big O Notation?

  • Comparing Algorithms: It allows us to compare the efficiency of different algorithms independently of hardware or other environmental factors.

  • Scalability: It helps us understand how an algorithm will perform as the size of the input data grows.

70
New cards

Common Big O Notations

  • O(1) - Constant Time:

    • Description: The algorithm takes the same amount of time to execute regardless of the size of the input.

    • Example: Accessing an element in an array by index.

    • Efficiency: Excellent.

  • O(log n) - Logarithmic Time:

    • Description: The runtime increases logarithmically as the input size increases. Typically occurs in algorithms that halve the problem size at each step, like binary search.

    • Example: Binary search.

    • Efficiency: Very good for large inputs.

  • O(n) - Linear Time:

    • Description: The runtime increases linearly with the size of the input. If you double the input size, the runtime also doubles.

    • Example: Linear search, iterating through a list.

    • Efficiency: Reasonable for moderate to large inputs.

  • O(n log n) - Log-Linear Time:

    • Description: The runtime increases more than linearly but less than quadratically. Common in efficient sorting algorithms like merge sort and quicksort.

    • Example: Merge sort, quicksort, heap sort.

    • Efficiency: Efficient for large inputs.

  • O(n²) - Quadratic Time:

    • Description: The runtime increases quadratically with the size of the input. If you double the input size, the runtime quadruples.

    • Example: Bubble sort, insertion sort, selection sort.

    • Efficiency: Poor for large inputs.

  • O(2ⁿ) - Exponential Time:

    • Description: The runtime doubles with each additional element in the input. Common in algorithms that solve problems by brute force or explore all possible solutions.

    • Example: Naive recursive algorithms for the Fibonacci sequence (without memoization), brute-force exploration of all subsets.

    • Efficiency: Very poor, impractical for large inputs.

  • O(n!) - Factorial Time:

    • Description: The runtime increases factorially with the size of the input. Common in algorithms that generate all permutations of an input set.

    • Example: Traveling salesman problem via brute force.

    • Efficiency: Extremely poor, infeasible for even moderate input sizes.

71
New cards

Big O - Time Complexity - Worst, Average, Best

Best Case, Worst Case, and Average Case

  • Best Case: The scenario where the algorithm performs the minimum possible number of operations. It’s often less relevant because it’s optimistic.

    • Example: For linear search, the best case is O(1), where the target element is the first one in the array.

  • Worst Case: The scenario where the algorithm performs the maximum possible number of operations. Big O notation typically describes the worst-case complexity.

    • Example: For linear search, the worst case is O(n), where the target element is the last one in the array or isn’t present at all.

  • Average Case: The scenario that represents the expected number of operations for a typical input. It’s more complex to calculate because it depends on the distribution of inputs.

    • Example: For linear search, the average case examines about n/2 elements, which in Big O notation simplifies to O(n).

72
New cards

Big O - Calculation Rules

Ignore Constants:

  • Rule: In Big O notation, constant factors are ignored.

  • Why: Big O notation focuses on the growth rate as the input size (n) increases, so a constant multiplier doesn't affect the growth rate.

  • Example: O(2n) simplifies to O(n)

Focus on the Dominant Term:

  • Rule: Only the term with the highest growth rate is considered.

  • Why: As n becomes large, the term with the highest growth rate will dominate the others.

  • Example: O(n² + n) simplifies to O(n²)

Drop Lower Order Terms:

  • Rule: Lower-order terms are ignored because they become insignificant as n grows.

  • Why: Similar to focusing on the dominant term, lower-order terms have a negligible impact on large inputs.

  • Example: O(n² + n + log n) simplifies to O(n²)

Multiplicative Constants Can Be Ignored:

  • Rule: Coefficients that multiply variables (e.g., 2n, 3n²) are ignored.

  • Why: Like constants, they don't change the growth rate.

  • Example: O(3n²) simplifies to O(n²)

Additive Constants Can Be Ignored:

  • Rule: Constant terms that don't depend on n are ignored.

  • Why: They don't affect the overall growth rate as n increases.

  • Example: O(n+10) simplifies to O(n)

Logarithms with Different Bases:

  • Rule: Logarithms with different bases can be considered equivalent in Big O notation.

  • Why: Changing the base of a logarithm only introduces a constant factor, which is ignored in Big O notation.

  • Example: O(log₂ n) simplifies to O(log n)

Non-Dominant Polynomial Terms:

  • Rule: In polynomials, only the highest degree term is considered.

  • Why: As n grows large, the highest degree term will dominate.

  • Example: O(5n³ + 2n² + 7) simplifies to O(n³)

Exponential Growth:

  • Rule: Exponential growth functions dominate polynomial functions.

  • Why: Exponential functions grow much faster than polynomial functions as n increases.

  • Example: O(2ⁿ + n³) simplifies to O(2ⁿ)

Nested Loops:

  • Rule: The time complexity of nested loops is the product of the complexities of each loop.

  • Why: Each loop iterates based on the input size, so their combined effect is multiplicative.

  • Example: A loop inside another loop, both running n times, results in O(n * n) = O(n²)

Sequential Statements:

  • Rule: If two independent statements (or loops) are executed sequentially, their time complexities are added.

  • Why: Sequential operations don't multiply time complexity, but rather add up.

  • Example: Two loops each running n times sequentially result in O(n + n) = O(n)
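
A small illustrative sketch (hypothetical functions) contrasting the last two rules:

# Nested loops multiply: O(n * n) = O(n²).
def count_pairs(items):
    count = 0
    for _ in items:          # outer loop: n iterations
        for _ in items:      # inner loop: n iterations per outer pass
            count += 1
    return count             # n * n total iterations

# Sequential loops add: O(n + n) = O(2n) = O(n).
def count_twice(items):
    count = 0
    for _ in items:          # first loop: n iterations
        count += 1
    for _ in items:          # second loop: n more iterations
        count += 1
    return count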
