My Version of C949 Study Guide Flashcards | Explains Algorithms
Finiteness
An algorithm must always have a finite number of steps before it ends. When the operation is finished, it must have a defined endpoint or output and not enter an endless loop.
Definiteness
An algorithm needs to have exact definitions for each step. Clear and straightforward directions ensure that every step is understood and can be taken easily.
Effectiveness
An algorithm’s stages must be sufficiently straightforward to be carried out in a finite time utilizing fundamental operations. With the resources at hand, every operation in the algorithm should be doable and practicable.
Generality
Rather than being limited to a single particular case, an algorithm should be able to solve a group of issues. It should offer a generic fix that manages a variety of inputs inside a predetermined range or domain.
Modularity
An algorithm exhibits modularity when a problem can be broken down into small, self-contained modules or steps; decomposing a problem this way is the essence of designing an algorithm.
Correctness
An algorithm is correct when every valid input produces the desired output, indicating that the algorithm was designed correctly. Correctness is established by analyzing the algorithm, not just by running it.
Maintainability
It means that the algorithm should be designed in a straightforward, structured way so that when you redefine the algorithm, no significant changes are made to the algorithm.
Functionality
It takes into account various logical steps to solve a real-world problem.
Robustness
Robustness refers to an algorithm's ability to handle unexpected, invalid, or edge-case inputs gracefully rather than failing.
User-friendly
An algorithm should be easy to understand; if it is difficult to follow, the designer will struggle to explain it to the programmer.
Simplicity
A simple algorithm is easier to understand, implement, and debug.
Extensibility
Your algorithm should be extensible so that another algorithm designer or programmer can reuse it or adapt it to related problems.
Brute Force Algorithm
A straightforward approach that exhaustively tries all possible solutions, suitable for small problem instances but may become impractical for larger ones due to its high time complexity.
Recursive Algorithm
A method that breaks a problem into smaller, similar subproblems and repeatedly applies itself to solve them until reaching a base case, making it effective for tasks with recursive structures.
Encryption Algorithm
Utilized to transform data into a secure, unreadable form using cryptographic techniques, ensuring confidentiality and privacy in digital communications and transactions.
Backtracking Algorithm
A trial-and-error technique used to explore potential solutions by undoing choices when they lead to an incorrect outcome, commonly employed in puzzles and optimization problems.
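A minimal Python sketch (not part of the original card) of the make-a-choice / undo-the-choice pattern, using a hypothetical subset-sum problem: find numbers that add up to a target, backtracking whenever a partial choice leads to a dead end.

```python
# Hypothetical example: backtracking search for a subset summing to target.
def subset_sum(nums, target, start=0, chosen=None):
    if chosen is None:
        chosen = []
    if target == 0:
        return list(chosen)              # valid combination found
    for i in range(start, len(nums)):
        if nums[i] <= target:
            chosen.append(nums[i])       # make a choice
            result = subset_sum(nums, target - nums[i], i + 1, chosen)
            if result is not None:
                return result
            chosen.pop()                 # undo the choice (backtrack)
    return None                          # dead end: no subset works
```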
Searching Algorithm
Designed to find a specific target within a dataset, enabling efficient retrieval of information from sorted or unsorted collections.
Sorting Algorithm
Aimed at arranging elements in a specific order, like numerical or alphabetical, to enhance data organization and retrieval.
Hashing Algorithm
Converts data into a fixed-size hash value, enabling rapid data access and retrieval in hash tables, commonly used in databases and password storage.
Divide and Conquer Algorithm
Breaks a complex problem into smaller subproblems, solves them independently, and then combines their solutions to address the original problem effectively.
Greedy Algorithm
Makes locally optimal choices at each step in the hope of finding a global optimum, useful for optimization problems but may not always lead to the best solution.
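A minimal Python sketch (not from the original card) of the greedy idea, using coin change: always take the largest coin that fits. This is optimal for US-style denominations but, as the card warns, not for every coin system.

```python
# Hypothetical example: greedy coin change (largest coin first).
def greedy_change(amount, coins=(25, 10, 5, 1)):
    result = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:        # locally optimal choice
            amount -= coin
            result.append(coin)
    return result
```

With coins (4, 3, 1) and amount 6, the greedy answer uses three coins (4, 1, 1) even though two coins (3, 3) suffice, illustrating that a greedy choice may miss the global optimum.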
Base Case
This is the condition under which the recursion stops. It represents the simplest instance of the problem, which can be solved directly without further recursion.
Recursive Case
This is the part of the algorithm that breaks the problem down into smaller instances of the same problem and then calls the algorithm recursively on these smaller instances.
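The two cases above can be seen in a classic factorial sketch (illustrative, not from the original cards):

```python
def factorial(n):
    if n == 0:                        # base case: solved directly
        return 1
    return n * factorial(n - 1)       # recursive case: smaller instance
```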
Linear Search
a sequential search algorithm that checks each element in a list one by one until a match is found or the entire list has been searched. It is simple to implement but can be inefficient for large datasets since it has a time complexity of O(n).
It takes longer as the n data size increases.
Time Complexity: O(N) Linear
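A minimal Python sketch of linear search (illustrative; function name is our own):

```python
def linear_search(items, target):
    # Check each element in order; worst case touches all n elements.
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1    # target not found
```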
Binary Search
a more efficient searching algorithm that requires a sorted list as input. It works by repeatedly dividing the search space in half, comparing the middle element with the target value, and eliminating half of the remaining elements based on the comparison.
Much faster than Linear Search for large datasets.
Time Complexity: O(log n) Logarithmic
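The halving described above can be sketched in Python like this (illustrative; assumes the input list is already sorted):

```python
def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1     # discard the left half
        else:
            high = mid - 1    # discard the right half
    return -1                 # target not present
```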
Bubble Sort
a simple sorting algorithm that repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order.
While easy to understand and implement, it has a time complexity of O(n²), making it inefficient for large datasets.
The algorithm gets its name from the way smaller elements _____ to the top of the list with each iteration.
Time Complexity: O(n²) Quadratic
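A minimal Python sketch of bubble sort (illustrative; the early-exit flag is a common optimization, not part of the card):

```python
def bubble_sort(items):
    data = list(items)
    n = len(data)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if data[j] > data[j + 1]:   # adjacent pair out of order
                data[j], data[j + 1] = data[j + 1], data[j]
                swapped = True
        if not swapped:                 # no swaps: already sorted
            break
    return data
```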
Selection Sort
a sorting algorithm that works by repeatedly finding the minimum element from the unsorted portion of the list and placing it at the beginning.
The algorithm maintains two subarrays: one that is sorted and one that is unsorted. Like Bubble Sort, Selection Sort has a time complexity of O(n²), but it generally performs better than Bubble Sort as it makes fewer swaps.
Time Complexity: O(n²) Quadratic
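A minimal Python sketch of selection sort, showing the one-swap-per-pass behavior mentioned above (illustrative):

```python
def selection_sort(items):
    data = list(items)
    for i in range(len(data) - 1):
        min_index = i
        # Find the minimum of the unsorted portion data[i:].
        for j in range(i + 1, len(data)):
            if data[j] < data[min_index]:
                min_index = j
        # One swap per pass places it at the front of the unsorted part.
        data[i], data[min_index] = data[min_index], data[i]
    return data
```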
Insertion Sort
a sorting algorithm that builds the final sorted array one element at a time.
It works by taking elements from the unsorted part and inserting them into their correct position in the sorted part of the array.
While it also has a time complexity of O(n²), it performs well on small datasets and is particularly efficient when dealing with arrays that are already partially sorted.
Time Complexity: O(n²) Quadratic
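A minimal Python sketch of insertion sort (illustrative), shifting larger sorted elements right to open a slot for each new element:

```python
def insertion_sort(items):
    data = list(items)
    for i in range(1, len(data)):
        key = data[i]                 # next unsorted element
        j = i - 1
        while j >= 0 and data[j] > key:
            data[j + 1] = data[j]     # shift larger elements right
            j -= 1
        data[j + 1] = key             # insert into its sorted position
    return data
```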
Merge Sort
a highly efficient divide-and-conquer sorting algorithm that works by dividing the array into smaller subarrays, sorting them, and then merging them back together.
It consistently performs well with a time complexity of O(n log n), making it much faster than the simpler sorting algorithms for large datasets.
Although it requires additional memory space, it is stable and predictable, making it a popular choice for sorting linked lists and in situations requiring stable sorting.
Time Complexity: O(n log n) Linearithmic
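The divide-then-merge steps can be sketched in Python like this (illustrative):

```python
def merge_sort(items):
    if len(items) <= 1:                 # base case
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # divide and sort each half
    right = merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:         # <= keeps the sort stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]   # append the leftovers
```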
Quick Sort
an efficient divide-and-conquer sorting algorithm that works by selecting a 'pivot' element and partitioning the array around it, with smaller elements placed before the pivot and larger elements after it.
This process is applied recursively to the sub-arrays until the entire array is sorted.
Although it performs extremely well in practice due to its efficient cache usage, its worst-case time complexity is O(n²), though this can be mitigated with proper pivot selection strategies.
Time Complexity: O(n log n) average, O(n²) worst case
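A minimal Python sketch of quick sort (illustrative; this list-comprehension version trades the in-place partition for readability and picks the middle element as pivot):

```python
def quick_sort(items):
    if len(items) <= 1:
        return list(items)
    pivot = items[len(items) // 2]            # middle element as pivot
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    # Recursively sort the partitions and stitch them together.
    return quick_sort(smaller) + equal + quick_sort(larger)
```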
Bucket Sort
a distribution-based sorting algorithm that works by distributing elements into a number of buckets, then sorting these buckets individually.
It assumes uniform distribution of the input data across buckets and performs well when this assumption holds true.
The algorithm has an average time complexity of O(n + k), where k is the number of ______
Time Complexity: O(n + k) Linear (where k is the number of buckets)
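A minimal Python sketch of bucket sort (illustrative; it assumes values lie in [0, 1) and are roughly uniform, matching the assumption stated above):

```python
def bucket_sort(values, num_buckets=10):
    # Assumes values in [0, 1); each value maps to bucket int(v * k).
    buckets = [[] for _ in range(num_buckets)]
    for v in values:
        buckets[int(v * num_buckets)].append(v)   # distribute
    result = []
    for bucket in buckets:
        result.extend(sorted(bucket))             # sort each bucket
    return result
```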
Heap Sort
a comparison-based sorting algorithm that uses a binary heap data structure to sort elements.
It works by first building a max-heap from the input array, then repeatedly extracting the maximum element and rebuilding the heap until all elements are sorted.
While it has a consistent time complexity of O(n log n) and sorts in-place, it typically performs slower than Quick Sort in practice due to poor cache performance.
Time Complexity: O(n log n) Linearithmic
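The build-heap-then-extract steps can be sketched in Python like this (illustrative; `sift_down` is our own helper name):

```python
def heap_sort(items):
    data = list(items)

    def sift_down(root, end):
        # Restore the max-heap property below index root, within data[:end].
        while True:
            child = 2 * root + 1
            if child >= end:
                return
            if child + 1 < end and data[child + 1] > data[child]:
                child += 1                      # pick the larger child
            if data[root] >= data[child]:
                return
            data[root], data[child] = data[child], data[root]
            root = child

    n = len(data)
    for i in range(n // 2 - 1, -1, -1):         # build the max-heap
        sift_down(i, n)
    for end in range(n - 1, 0, -1):             # extract max repeatedly
        data[0], data[end] = data[end], data[0]
        sift_down(0, end)
    return data
```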
Shell Sort
Generalization of insertion sort with a gap sequence.
Sorts elements far apart and gradually reduces the gap.
Efficient for medium-sized datasets.
Time Complexity: Depends on the gap sequence; commonly O(n^(3/2))
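A minimal Python sketch of shell sort (illustrative; it uses the simple halving gap sequence, one of several possible choices):

```python
def shell_sort(items):
    data = list(items)
    gap = len(data) // 2
    while gap > 0:                        # shrink the gap each pass
        # Gapped insertion sort: compare elements gap positions apart.
        for i in range(gap, len(data)):
            key = data[i]
            j = i
            while j >= gap and data[j - gap] > key:
                data[j] = data[j - gap]   # shift the larger element right
                j -= gap
            data[j] = key
        gap //= 2                         # gap of 1 is plain insertion sort
    return data
```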