An algorithm must complete in a finite number of steps and not run indefinitely.
Each step must be precisely and unambiguously defined.
One or more initial values that feed into the algorithm, taken from a defined domain.
The algorithm must produce at least one result (output) related to the input.
All operations should be basic enough to be performed in finite time with available resources.
The algorithm should solve a broad set of problems (not just a single case).
Breaking a problem into smaller modules or steps, aiding clarity and organization.
It must produce the desired output for all valid inputs; logically verified.
The structure should be clear so updates can be made with minimal changes.
It addresses a real-world problem with clear, logical steps.
The algorithm can handle unexpected or invalid inputs without crashing.
The logic is clear enough to be easily explained to others.
A simpler approach is easier to implement, read, and debug.
It can be adapted or extended for new needs with minimal refactoring.
Tries all possible solutions exhaustively; simple but can be very inefficient for large inputs.
Solves a problem by breaking it into smaller instances of itself until a base case is reached.
Transforms data into a secure, unreadable form to ensure confidentiality.
Explores partial solutions, backtracking when a path leads to a dead end (common in puzzles).
Finds a target item within a collection; e.g., linear search, binary search.
Rearranges data into a specified order, e.g., ascending or descending.
Converts data into a fixed-size value (hash) for fast lookups in a hash table.
Splits the problem into smaller subproblems, solves them, and combines their solutions.
Makes the locally optimal choice at each step, hoping to reach a globally optimal solution.
Saves intermediate results to avoid repeated calculations (overlapping subproblems).
Incorporates randomness in its process, often for approximations or probabilistic solutions.
Consists of a Base Case (stops recursion), a Recursive Case (smaller subproblem), and relies on the Call Stack to track pending calls.
n! = n × (n−1)!, with 0! = 1 as the base case.
Often more elegant and natural for tree-like or divide-and-conquer problems.
Can cause extra function-call overhead and potential stack overflows if depth is large.
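As a minimal Python sketch, the factorial definition above maps directly onto a recursive function (the name `factorial` is illustrative):

```python
def factorial(n: int) -> int:
    # Base case: 0! = 1 stops the recursion.
    if n == 0:
        return 1
    # Recursive case: n! = n * (n-1)!, a smaller subproblem.
    return n * factorial(n - 1)
```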
Checks each element sequentially until the target is found or the list ends; O(n) time worst case.
Useful for small or unsorted datasets, or when simplicity is more important than performance.
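A minimal linear-search sketch in Python (function name illustrative), scanning until a match or the end of the list:

```python
def linear_search(items, target):
    # Check each element sequentially; O(n) worst case.
    for i, value in enumerate(items):
        if value == target:
            return i  # index of the first match
    return -1  # target not present
```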
Requires a sorted list; repeatedly halves the search range to find the target in O(log n) time.
When data is sorted and you need faster lookups (logarithmic vs. linear).
Linear Search: O(n); Binary Search: O(log n). Binary is faster but requires sorting.
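Binary search can be sketched as follows (Python, illustrative name), halving the search range each iteration on a sorted list:

```python
def binary_search(sorted_items, target):
    # Requires sorted input; halves the range each step -> O(log n).
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1  # target is in the upper half
        else:
            hi = mid - 1  # target is in the lower half
    return -1
```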
Estimates the position of the target based on its value; average O(log log n), worst O(n).
DFS goes deep along a path before backtracking; BFS visits neighbors level by level. Both O(V+E).
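The DFS/BFS contrast above can be sketched on an adjacency-list graph (Python; function names and the dict-of-lists representation are assumptions):

```python
from collections import deque

def bfs_order(graph, start):
    # Visit neighbors level by level using a FIFO queue; O(V + E).
    seen, order, queue = {start}, [], deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

def dfs_order(graph, start, seen=None):
    # Go deep along one path before backtracking; O(V + E).
    if seen is None:
        seen = set()
    seen.add(start)
    order = [start]
    for nxt in graph[start]:
        if nxt not in seen:
            order += dfs_order(graph, nxt, seen)
    return order
```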
Repeatedly swaps adjacent elements if out of order; worst/average O(n²), best O(n) if nearly sorted.
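A bubble-sort sketch in Python (illustrative name); the `swapped` flag gives the O(n) best case on nearly sorted input:

```python
def bubble_sort(items):
    # Repeatedly swap out-of-order neighbors; largest value bubbles to the end.
    a = list(items)
    for end in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(end):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:  # no swaps means already sorted: stop early
            break
    return a
```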
Finds the minimum and swaps it with the first unsorted position; O(n²) in all cases.
Builds a sorted portion by inserting each new element into its correct spot; O(n²) worst/avg, O(n) best.
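Insertion sort can be sketched as (Python, illustrative name):

```python
def insertion_sort(items):
    # Grow a sorted prefix, inserting each new element into its spot.
    a = list(items)
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]  # shift larger elements one slot right
            j -= 1
        a[j + 1] = key
    return a
```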
Divide-and-conquer: split the array, recursively sort halves, then merge; O(n log n) in all cases.
Stable sorting, consistently O(n log n) in best, average, and worst cases.
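A merge-sort sketch in Python (illustrative name); the `<=` comparison in the merge step is what keeps equal elements in their original order (stability):

```python
def merge_sort(items):
    # Split, recursively sort halves, then merge; O(n log n) in all cases.
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps the sort stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```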
Partitions around a pivot and recursively sorts each side; average O(n log n), worst O(n²).
A poor pivot leads to unbalanced partitions and worst-case O(n²) performance.
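A simple (not in-place) quicksort sketch in Python; choosing the pivot at random makes the unbalanced-partition worst case unlikely in practice:

```python
import random

def quick_sort(items):
    # Partition around a pivot, then recursively sort each side.
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    less    = [x for x in items if x < pivot]
    equal   = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quick_sort(less) + equal + quick_sort(greater)
```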
Builds a heap, repeatedly removes the root (max or min), and re-heapifies; O(n log n), in-place, not stable.
Counts occurrences of each distinct value (within a limited range); O(n + k).
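Counting sort can be sketched as follows (Python, illustrative name), assuming integer values in the range 0..k-1:

```python
def counting_sort(items, k):
    # Count occurrences of each value in 0..k-1, then replay them; O(n + k).
    counts = [0] * k
    for x in items:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out
```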
Sorts digit by digit (or character by character) using another stable sort; O(n·k).
Distributes elements into buckets, sorts each bucket individually; average O(n + k), worst O(n²).
A generalized insertion sort using a gap sequence; typical average ~O(n^(3/2)), depends on gap choice.
Describes the upper bound of an algorithm’s time (or space) complexity as input size grows.
O(1), O(log n), O(n), O(n log n), O(n²), O(2^n), O(n!).
Ignore constants, keep the dominant term, treat different log bases as the same, etc.
Best: minimal operations, Worst: maximal operations, Average: typical scenario (often used).
Data Type: defines kind of data (e.g., int); Data Structure: organizes data (e.g., arrays, linked lists).
A model specifying operations and behavior, not implementation details.
A type with a fixed set of named constants; enhances readability and type safety.
A contiguous block of memory for homogeneous elements; O(1) index access, O(n) inserts in the middle.
Access O(1), linear search O(n), insertion/deletion at end O(1) amortized (dynamic array), at index O(n).
Nodes linked by pointers; dynamic size, O(n) to access by index, easy insertion/deletion at head.
Access O(n), insert at head O(1), insert at tail O(n) unless tail pointer is maintained.
LIFO structure with push() and pop() in O(1) average time.
Function call stack, undo mechanisms, backtracking, depth-first search.
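A minimal stack sketch in Python, backed by a list so push and pop at the tail are O(1) (class and method names follow the card above):

```python
class Stack:
    # LIFO: last element pushed is the first popped.
    def __init__(self):
        self._items = []

    def push(self, x):
        self._items.append(x)   # O(1) amortized

    def pop(self):
        return self._items.pop()  # O(1)

    def peek(self):
        return self._items[-1]

    def __len__(self):
        return len(self._items)
```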
FIFO structure with enqueue() and dequeue() in O(1) average time.
Used in scheduling, breadth-first search, buffering (e.g., printer queues).
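A minimal queue sketch in Python, backed by `collections.deque` so both ends are O(1) (method names follow the card above):

```python
from collections import deque

class Queue:
    # FIFO: enqueue at the rear, dequeue at the front.
    def __init__(self):
        self._items = deque()

    def enqueue(self, x):
        self._items.append(x)       # O(1)

    def dequeue(self):
        return self._items.popleft()  # O(1)

    def __len__(self):
        return len(self._items)
```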
Double-ended queue allowing insertion/removal at both front and rear in O(1).
Useful for sliding window problems, scheduling, or any scenario needing both ends accessible.
A collection that allows duplicates; used for counting occurrences (e.g., word counts).
Maps keys to indices via a hash function; average O(1) for insert, search, delete.
Deterministic, uniform distribution of keys, and quick to compute.
Chaining (linked lists in buckets) or open addressing (linear/quadratic probing, double hashing).
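A hash table with separate chaining can be sketched as (Python; the class name, fixed bucket count, and list-per-bucket representation are simplifying assumptions — no resizing is shown):

```python
class ChainedHashTable:
    # Separate chaining: each bucket holds a list of (key, value) pairs.
    def __init__(self, buckets=8):
        self._buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        # Hash function maps the key to a bucket index.
        return self._buckets[hash(key) % len(self._buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, value)  # overwrite existing key
                return
        bucket.append((key, value))

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```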
Hierarchical structure with nodes (root, children, leaves); can be binary, n-ary, etc.
Full (0 or 2 children), Complete (filled levels left to right), Perfect (all leaves same level), Balanced (height difference ≤ 1).
Left subtree < node < right subtree; in-order traversal is sorted.
Insert, search, delete in average O(log n); can degrade if not balanced.
Pre-order (NLR), In-order (LNR), Post-order (LRN), Level-order (BFS by levels).
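A minimal BST sketch in Python (names illustrative) showing insertion and the in-order (LNR) traversal that yields keys in sorted order:

```python
class BSTNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    # Average O(log n); degrades toward O(n) on a skewed tree.
    if root is None:
        return BSTNode(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def inorder(root):
    # LNR traversal of a BST visits keys in ascending order.
    if root is None:
        return []
    return inorder(root.left) + [root.key] + inorder(root.right)
```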
A self-balancing BST that keeps subtree heights within 1; uses rotations if unbalanced.
A BST with nodes colored red or black to maintain balanced height; O(log n) operations.
A self-balancing tree with multiple children per node, often used in databases and filesystems.
A complete binary tree satisfying heap property (max-heap or min-heap).
Insert O(log n), remove-root O(log n), peek O(1), heapify O(n).
For node at index i: left child = 2i+1, right child = 2i+2 (0-based indexing).
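The index formulas above can be sketched as a small check of the min-heap property on an array (Python, illustrative names):

```python
def children(i):
    # 0-based array heap: left = 2i+1, right = 2i+2 (parent = (i-1)//2).
    return 2 * i + 1, 2 * i + 2

def is_min_heap(a):
    # Heap property: every node is <= each of its children.
    for i in range(len(a)):
        left, right = children(i)
        if left < len(a) and a[i] > a[left]:
            return False
        if right < len(a) and a[i] > a[right]:
            return False
    return True
```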