C949 V4 Full Study Guide


1
New cards

Finiteness (Characteristic)

An algorithm must always have a finite number of steps before it ends. When the operation is finished, it must have a defined endpoint or output and not enter an endless loop

2
New cards

Definiteness (Characteristic)

An algorithm needs to have exact definitions for each step. Clear and straightforward directions ensure that every step is understood and can be taken easily

3
New cards

Input (Characteristic)

An algorithm requires one or more inputs. The values that are first supplied to the algorithm before its processing are known as inputs. These inputs come from a predetermined range of acceptable values

4
New cards

Output (Characteristic)

One or more outputs must be produced by an algorithm. The output is the outcome of the algorithm after every step has been completed. The relationship between the input and results should be clear

5
New cards

Effectiveness (Characteristic)

An algorithm's stages must be sufficiently straightforward to be carried out in a finite time using fundamental operations. With the resources at hand, every operation in the algorithm should be doable and practicable.

6
New cards

Generality (Characteristic)

Rather than being limited to a single particular case, an algorithm should be able to solve a group of problems. It should offer a generic fix that manages a variety of inputs inside a predetermined range or domain

7
New cards

Modularity (Factors)

This factor is satisfied when a given problem can be broken down into small modules or small steps, which is essentially the basic definition of an algorithm

8
New cards

Correctness (Factors)

An algorithm is correct when the given inputs produce the desired output, indicating that the algorithm was designed correctly and that its analysis was carried out correctly

9
New cards

Maintainability (Factors)

The algorithm should be designed in a straightforward, structured way so that it can be redefined later without requiring significant changes

10
New cards

Functionality (Factors)

It accounts for the various logical steps required to solve a real-world problem

11
New cards

Robustness (Factors)

Robustness refers to an algorithm's ability to define the problem clearly

12
New cards

User-friendly (Factors)

If the algorithm is difficult to understand, the designer will not be able to explain it to the programmer

13
New cards

Simplicity (Factors)

If an algorithm is simple, it is easy to understand

14
New cards

Extensibility (Factors)

The algorithm should be extensible so that other algorithm designers or programmers can build on it

15
New cards

Brute Force Algorithm (Type of Algorithm)

A straightforward approach that exhaustively tries all possible solutions, suitable for small problem instances but may become impractical for larger ones due to its time complexity

16
New cards

Recursive Algorithm (Type of Algorithm)

A method that breaks a problem into smaller, similar subproblems and repeatedly applies itself to solve them until reaching a base case, making it effective for tasks with recursive structures

17
New cards

Encryption Algorithm (Type of Algorithm)

Utilized to transform data into a secure, unreadable form using cryptographic techniques, ensuring confidentiality and privacy in digital communications and transactions

18
New cards

Backtracking Algorithm (Type of Algorithm)

A trial-and-error technique used to explore potential solutions by undoing choices when they lead to an incorrect outcome, commonly employed in puzzles and optimization problems

19
New cards

Searching Algorithm (Type of Algorithm)

Designed to find a specific target within a dataset, enabling efficient retrieval of information from sorted or unsorted collections

20
New cards

Sorting Algorithm (Type of Algorithm)

Aimed at arranging elements in a specific order, like numerical or alphabetical, to enhance data organization and retrieval.

21
New cards

Hashing Algorithm (Type of Algorithm)

Converts data into a fixed-size hash value, enabling rapid data access and retrieval in hash tables, commonly used in databases and password storage

22
New cards

Divide and Conquer Algorithm (Type of Algorithm)

Breaks a complex problem into smaller subproblems, solves them independently, and then combines their solutions to address the original problem effectively

23
New cards

Greedy Algorithm (Type of Algorithm)

Makes locally optimal choices at each step in the hope of finding a global optimum, useful for optimization problems but may not always lead to the best solution

24
New cards

Dynamic Programming Algorithm (Type of Algorithm)

Stores and reuses intermediate results to avoid redundant computations, enhancing the efficiency of solving complex problems
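
For instance, a minimal Python sketch of the reuse idea, using a memoized Fibonacci function as a hypothetical example (the fib name and functools.lru_cache are illustrative choices, not part of the card):

```python
from functools import lru_cache

# Hypothetical example: without caching, fib(n) recomputes the same
# subproblems exponentially; storing intermediate results makes it linear.
@lru_cache(maxsize=None)
def fib(n: int) -> int:
    if n < 2:                      # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(40))  # 102334155
```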

25
New cards

Randomized Algorithm (Type of Algorithm)

Utilizes randomness in its steps to achieve a solution, often used in situations where an approximate or probabilistic answer suffices.

26
New cards

Recursive Algorithms

A recursive algorithm is one that solves a problem by breaking it down into smaller instances of the same problem, which it then solves in the same way. This process continues until the problem is reduced to a base case, which is solved directly without further recursion.
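
A minimal Python sketch of this structure, using factorial as an assumed example:

```python
def factorial(n: int) -> int:
    """Recursively computes n! (illustrative example)."""
    if n == 0:                           # base case: solved directly, stops the recursion
        return 1
    return n * factorial(n - 1)          # recursive case: smaller instance of the same problem

print(factorial(5))  # 120
```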

27
New cards

Base Case

This is the condition under which the recursion stops. It represents the simplest instance of the problem, which can be solved directly without further recursion

28
New cards

Recursive Case

This is the part of the algorithm that breaks the problem down into smaller instances of the same problem and then calls the algorithm recursively on these smaller instances

29
New cards

Stack (Recursive Algorithms)

Each recursive call is placed on the system call stack. When the base case is reached, the stack begins to unwind as each instance of the function returns its result

30
New cards

Advantages of Recursion

Simplicity: Recursive solutions are often more elegant and easier to understand than their iterative counterparts.

Direct Translation: Some problems are naturally recursive, like tree traversals, making recursion the most straightforward approach

31
New cards

Disadvantages of Recursion

Performance: Recursive algorithms can be less efficient due to the overhead of multiple function calls and potential stack overflow issues for deep recursion.

Memory Usage: Recursion can consume more memory because each function call adds a new frame to the call stack.

32
New cards

When to use recursion

When a problem can naturally be divided into similar sub-problems (e.g., tree traversal, searching algorithms like binary search).

When the recursive solution is significantly simpler or more intuitive than an iterative one.

33
New cards

Linear Search (Concept)

Linear search is the simplest search algorithm.

It works by sequentially checking each element of the array or list until the target element is found or the end of the collection is reached.

34
New cards

Linear Search Algorithm

1. Start from the first element of the array.

2. Compare the current element with the target element.

3. If they match, return the index of the element.

4. If they don't match, move to the next element and repeat the process.

5. If the target element is not found by the end of the array, return a "not found" indication.
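
The steps above map directly to code; a minimal Python sketch (returning -1 as the "not found" indication is an assumption):

```python
def linear_search(arr, target):
    """Check each element in order; return its index, or -1 if not found."""
    for i, value in enumerate(arr):
        if value == target:
            return i
    return -1

print(linear_search([7, 3, 9, 4], 9))   # 2
print(linear_search([7, 3, 9, 4], 5))   # -1
```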

35
New cards

Time Complexity of Linear Search

O(n), where n is the number of elements in the array

36
New cards

When to use linear search algorithm

When the array or list is small.

When the array is unsorted.

When simplicity is more important than performance.

37
New cards

Binary Search Concept

Binary search is much more efficient than linear search but requires the array or list to be sorted.

It works by repeatedly dividing the search interval in half. If the target value is less than the middle element, the search continues in the left half, otherwise in the right half.

38
New cards

Binary Search Algorithm

1. Start with two pointers, one at the beginning (low) and one at the end (high) of the sorted array.

2. Find the middle element of the current interval.

3. Compare the middle element with the target:

-If they match, return the index of the middle element.

-If the target is less than the middle element, repeat the search on the left half.

-If the target is greater, repeat the search on the right half.

4. If the interval becomes invalid (low > high), return a "not found" indication.
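
A minimal Python sketch of the iterative version described above (the -1 "not found" return is an assumption):

```python
def binary_search(arr, target):
    """Binary search on a sorted list; returns the index of target, or -1."""
    low, high = 0, len(arr) - 1
    while low <= high:                  # interval is valid while low <= high
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif target < arr[mid]:
            high = mid - 1              # continue in the left half
        else:
            low = mid + 1               # continue in the right half
    return -1                           # low > high: not found

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
```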

39
New cards

Time complexity of Binary Search

O(log n), where n is the number of elements in the array. This logarithmic time complexity makes binary search significantly faster than linear search for large datasets

40
New cards

Linear Search Best Case

O(1) - The target element is the first element

41
New cards

Linear Search Average Case

O(n) - The target element is somewhere in the middle or not in the array

42
New cards

Linear Search Worst Case

O(n) - The target element is the last element or not present

43
New cards

Binary Search Best Case

O(1) - The target element is the middle element

44
New cards

Binary Search Average Case

O(log n) - The target element is not immediately found but is within the sorted array

45
New cards

Binary Search Worst Case

O(log n) - The target element is at the extreme ends or not present

46
New cards

Interpolation Search Concept

Similar to binary search but works on uniformly distributed data. It estimates the position of the target element based on the value.
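
A minimal Python sketch of the value-based position estimate (the probe formula is the usual linear interpolation; the function name and -1 return are assumptions):

```python
def interpolation_search(arr, target):
    """Search a sorted, roughly uniformly distributed list by estimating position."""
    low, high = 0, len(arr) - 1
    while low <= high and arr[low] <= target <= arr[high]:
        if arr[high] == arr[low]:                 # avoid division by zero
            return low if arr[low] == target else -1
        # Estimate the probe position from the target's value, not just the midpoint
        pos = low + (target - arr[low]) * (high - low) // (arr[high] - arr[low])
        if arr[pos] == target:
            return pos
        elif arr[pos] < target:
            low = pos + 1
        else:
            high = pos - 1
    return -1

print(interpolation_search([10, 20, 30, 40, 50], 40))  # 3
```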

47
New cards

Interpolation Search Time Complexity

O(log log n) on average for uniformly distributed data, O(n) in the worst case

48
New cards

Interpolation Search Best Case

O(1) — The target element is exactly where the interpolation suggests

49
New cards

Interpolation Search Average Case

O(log log n) - Uniformly distributed data

50
New cards

Interpolation Search Worst Case

O(n) - Highly skewed data distribution or worst interpolation

51
New cards

Depth-First Search (DFS) and Breadth-First Search (BFS) Concept

Used primarily in graph and tree data structures. DFS explores as far as possible along one branch before backtracking, while BFS explores all neighbors at the present depth before moving on to nodes at the next depth level.
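
A minimal Python sketch of both traversals on a small assumed adjacency-list graph (the graph itself is hypothetical):

```python
from collections import deque

# Hypothetical graph used only for illustration
graph = {
    'A': ['B', 'C'],
    'B': ['D'],
    'C': ['E'],
    'D': [],
    'E': ['B'],
}

def dfs(start):
    """Depth-first: follow one branch as far as possible before backtracking."""
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()                  # LIFO stack -> depth-first
        if node not in visited:
            visited.add(node)
            order.append(node)
            stack.extend(reversed(graph[node]))
    return order

def bfs(start):
    """Breadth-first: visit all neighbours at the current depth first."""
    visited, queue, order = {start}, deque([start]), []
    while queue:
        node = queue.popleft()              # FIFO queue -> breadth-first
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order

print(dfs('A'))  # ['A', 'B', 'D', 'C', 'E']
print(bfs('A'))  # ['A', 'B', 'C', 'D', 'E']
```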

52
New cards

DFS and BFS Time Complexity

O(V+E), where V is the number of vertices and E is the number of edges.

53
New cards

DFS/BFS Use Case

Useful for searching nodes in graphs and trees

54
New cards

DFS Best Case

O(1) - The target node is found immediately

55
New cards

DFS Average Case

O(V+E) - Typically when all nodes and edges must be explored

56
New cards

DFS Worst Case

O(V+E) - The target node is the last one discovered

57
New cards

BFS Best Case

O(1) - The target node is the root or the first node checked

58
New cards

BFS Average Case

O(V+E) - All nodes and edges need to be explored

59
New cards

BFS Worst Case

O(V+E) - The target node is the last one explored

60
New cards

Bubble Sort Distinct Characteristics

Repeatedly swaps adjacent elements if they are in the wrong order.

Simple, but inefficient for large datasets.

"Bubbles" the largest element to the end of the list.

61
New cards

Bubble Sort Use Case

Simple but inefficient for large datasets. Best used for educational purposes or small lists.

62
New cards

Bubble Sort Best Case

O(n) - The array is already sorted (with an optimized version that stops early)

63
New cards

Bubble Sort Average Case

O(n²) - Average case with random elements

64
New cards

Bubble Sort Worst Case

O(n²) - The array is sorted in reverse order

65
New cards

Selection Sort Use Case

Inefficient for large lists, but useful when memory writes are more expensive than comparisons

66
New cards

Selection Sort Distinct Characteristics

● Finds the minimum element and swaps it with the first unsorted element.

● Reduces the problem size by one in each iteration.

● Always performs O(n²) comparisons, regardless of input
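
A minimal Python sketch of the behaviour described above:

```python
def selection_sort(arr):
    """In-place selection sort: swap the minimum into the first unsorted slot."""
    n = len(arr)
    for i in range(n - 1):
        min_index = i
        for j in range(i + 1, n):           # find the minimum of the unsorted suffix
            if arr[j] < arr[min_index]:
                min_index = j
        if min_index != i:                   # at most one swap (memory write) per pass
            arr[i], arr[min_index] = arr[min_index], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```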

67
New cards

Selection Sort Best Case

O(n²) — Selection sort does not improve with better input; it is always O(n²).

68
New cards

Selection Sort Average Case

O(n²) — Average case with random elements.

69
New cards

Selection Sort Worst Case

O(n²) — Selection sort is insensitive to input order

70
New cards

Insertion Sort Use Case

Good for small or nearly sorted lists

71
New cards

Insertion Sort Distinct Characteristics

● Builds a sorted list one element at a time.

● Efficient for small or nearly sorted datasets.

● Shifts elements to make space for the current element
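
A minimal Python sketch of the shifting behaviour described above:

```python
def insertion_sort(arr):
    """In-place insertion sort: grow a sorted prefix one element at a time."""
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and arr[j] > key:   # shift larger elements right to make space
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key                 # insert the current element into position
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```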

72
New cards

Insertion Sort Best Case

O(n) — The array is already sorted

73
New cards

Insertion Sort Average Case

O(n²) — Average case with random elements

74
New cards

Insertion Sort Worst Case

O(n²) — The array is sorted in reverse order

75
New cards

Merge Sort Use Case

Efficient and stable, good for large datasets

76
New cards

Merge Sort Distinct Characteristics

● Divides the list into halves, sorts each half, and then merges them.

● Stable and efficient for large datasets.

● Requires additional space for merging.
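
A minimal Python sketch (this version returns a new list rather than sorting in place):

```python
def merge_sort(arr):
    """Divide the list, sort each half, then merge; uses extra space (stable)."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])            # sort each half recursively
    right = merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:             # <= keeps equal elements in order (stability)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```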

77
New cards

Merge Sort Best Case

O(n log n) — Merge sort's time complexity is the same in all cases.

78
New cards

Merge Sort Average Case

O(n log n)

79
New cards

Merge Sort Worst Case

O(n log n)

80
New cards

Quicksort Use Case

Often faster in practice than merge sort, but not stable

81
New cards

Quicksort Distinct Characteristics

● Selects a "pivot" element and partitions the array around it.

● Recursively sorts the partitions.

● Efficient, but can degrade to O(n²) if poor pivot selection occurs.
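
A minimal Python sketch; choosing the last element as the pivot is an assumption, and this version is not in-place:

```python
def quicksort(arr):
    """Partition around a pivot, then recursively sort the partitions."""
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]                                   # pivot choice is an assumption
    left = [x for x in arr[:-1] if x <= pivot]        # split: elements <= pivot
    right = [x for x in arr[:-1] if x > pivot]        # split: elements > pivot
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]
```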

82
New cards

Quick Sort Best Case

O(n log n) — The pivot splits the array into two nearly equal halves.

83
New cards

Quicksort Average Case

O(n log n) — Average case with random pivots.

84
New cards

Quicksort Worst Case

O(n²) — The pivot is always the smallest or largest element, leading to unbalanced partitions.

85
New cards

Quicksort

Look for the keywords "pivot" and/or "split". (Pivot, Split)

86
New cards

Heap Sort Use Case

Useful when memory usage is a concern as it's an in-place algorithm

87
New cards

Heap Sort Distinct Characteristics

Utilizes a binary heap data structure.

Builds a max-heap and repeatedly extracts the maximum element.

Efficient and in-place, but not stable.
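
A minimal Python sketch of the build-heap / extract-max cycle (sift_down is a hypothetical helper name):

```python
def heap_sort(arr):
    """In-place heap sort: build a max-heap, then repeatedly move the max to the end."""
    n = len(arr)

    def sift_down(root, size):
        # Restore the max-heap property for the subtree rooted at `root`.
        while True:
            largest = root
            left, right = 2 * root + 1, 2 * root + 2
            if left < size and arr[left] > arr[largest]:
                largest = left
            if right < size and arr[right] > arr[largest]:
                largest = right
            if largest == root:
                return
            arr[root], arr[largest] = arr[largest], arr[root]
            root = largest

    for i in range(n // 2 - 1, -1, -1):      # build the max-heap (heapify)
        sift_down(i, n)
    for end in range(n - 1, 0, -1):          # extract max, then shrink the heap
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```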

88
New cards

Heap Sort Best Case

O(n log n) — Heap sort's time complexity is the same in all cases.

89
New cards

Heap Sort Average Case

O(n log n)

90
New cards

Heap Sort Worst Case

O(n log n)

91
New cards

Heap Sort

Look for code that uses a heap data structure to repeatedly extract the maximum (or minimum) element and rebuilds the heap. (Heapify, Extract Max, Build Heap)

92
New cards

Counting Sort Use Case

Efficient for sorting integers or other items with a small range of possible values

93
New cards

Counting Sort Distinct Characteristics

Non-comparative sorting.

Counts occurrences of each element and uses this information to place elements.

Efficient for small ranges of integers.
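
A minimal Python sketch, assuming non-negative integer input:

```python
def counting_sort(arr):
    """Non-comparative sort for small non-negative integers."""
    if not arr:
        return arr
    k = max(arr)                          # k = range of the input
    counts = [0] * (k + 1)
    for value in arr:                     # count occurrences of each value
        counts[value] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)    # place each value as often as it occurred
    return result

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```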

94
New cards

Counting Sort Best Case

O(n+k) — k is the range of the input.

95
New cards

Counting Sort Average Case

O(n+k)

96
New cards

Counting Sort Worst Case

O(n+k)

97
New cards

Radix Sort Use Case

Effective for sorting large numbers or strings with a fixed length

98
New cards

Radix Sort Distinct Characteristics

Sorts numbers by processing individual digits.

Non-comparative, stable, and efficient for specific data types.

Often combined with counting sort.
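
A minimal Python sketch of the least-significant-digit variant, assuming non-negative integers (bucket lists stand in for the stable counting pass per digit):

```python
def radix_sort(arr):
    """LSD radix sort: sort by each digit in turn using a stable bucket pass."""
    if not arr:
        return arr
    exp = 1
    max_value = max(arr)
    while max_value // exp > 0:           # one pass per digit, least significant first
        buckets = [[] for _ in range(10)]
        for value in arr:
            buckets[(value // exp) % 10].append(value)   # stable: keeps insertion order
        arr = [value for bucket in buckets for value in bucket]
        exp *= 10
    return arr

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))  # [2, 24, 45, 66, 75, 90, 170, 802]
```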

99
New cards

Radix Sort Best Case

O(n*k) — k is the number of digits in the largest number

100
New cards

Radix Sort Average Case

O(n*k)