A direct recursive implementation, fib(n) { return fib(n-1) + fib(n-2); }, is the most computationally efficient way to calculate the Nth Fibonacci number because its code is the shortest and most closely matches the mathematical definition.
Incorrect
This approach has exponential time complexity (O(2^n)) because it recalculates the same subproblems over and over.
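For reference, a minimal Python sketch of that naive recursion (the name fib_naive is an illustrative choice, not a prescribed API):

```python
def fib_naive(n):
    """Direct translation of the recurrence; exponential time because subproblems repeat."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

print(fib_naive(10))  # 55 -- fine for small n, but fib_naive(50) is impractically slow
```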
When using that simple recursive function to calculate fib(10), the specific sub-problem fib(3) will be calculated only one time.
Incorrect
fib(3) is recomputed inside the subtrees for fib(5), fib(6), fib(7), and so on, leading to many redundant computations.
The fundamental reason the naive recursive approach is slow is that it has to perform a very deep chain of function calls (e.g., fib(n) calls fib(n-1), which calls fib(n-2), etc.).
Incorrect
The depth of recursion is only n (O(n) space); the slowness comes from the breadth of redundant calls at each level.
If we store the result of each Fibonacci number in a cache (like an array or a hash map) the first time we compute it, we can improve the time complexity from exponential to linear (O(n)).
Correct
This technique (memoization) ensures every sub-problem is solved only once, eliminating the redundant work.
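A minimal sketch of such a cache in Python (the fib_memo name and the dictionary cache are illustrative choices):

```python
def fib_memo(n, cache=None):
    """Recursive Fibonacci with memoization: each F(k) is computed once, then reused."""
    if cache is None:
        cache = {0: 0, 1: 1}
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]

print(fib_memo(30))  # 832040, computed in O(n) time instead of exponential
```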
To calculate fib(n) without recursion, you must build an array of size n+1 to store all the Fibonacci numbers from 0 to n.
Incorrect
You only ever need the last two numbers to calculate the next one, so you can solve it using just two variables (O(1) space).
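A sketch of the two-variable iteration in Python, assuming the usual F(0) = 0, F(1) = 1 convention:

```python
def fib_iter(n):
    """Iterative Fibonacci keeping only the last two values: O(n) time, O(1) space."""
    a, b = 0, 1  # a = F(0), b = F(1)
    for _ in range(n):
        a, b = b, a + b  # slide the window forward by one
    return a

print(fib_iter(10))  # 55
```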
For any given n > 1, fib(n) will always be an odd number if n is odd, and an even number if n is even.
Incorrect
The pattern is actually (Odd, Odd, Even, Odd, Odd, Even, …); an even number appears every third position in the sequence.
A standard 64-bit integer (long long in C++, long in Java) is a safe and sufficient data type to store the result of fib(1000).
Incorrect
Fibonacci numbers grow exponentially, and fib(1000) is an enormous number that would overflow even a 64-bit integer, requiring a BigInteger library.
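For illustration, Python's built-in int is arbitrary precision, so no separate BigInteger library is needed there; the same two-variable loop can show how large fib(1000) actually gets:

```python
def fib_iter(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Python ints never overflow; this prints the number of decimal digits in F(1000).
print(len(str(fib_iter(1000))))  # 209 digits -- far beyond any 64-bit integer
```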
The core issue of \"re-computing the same sub-problems\" that we identified in the naive recursive solution is a problem-solving pattern that is unique to the Fibonacci sequence.
Incorrect
This pattern, known as \"overlapping subproblems,\" is the key indicator that a problem is a great candidate for a powerful technique called Dynamic Programming.
A naive recursive Fibonacci implementation without memoization runs in exponential time, specifically O(φⁿ).
Correct
Each call spawns two subcalls, and the size of the recursion tree grows roughly like φⁿ, where φ is the golden ratio.
Adding memoization to the recursive Fibonacci reduces its time complexity to linear, O(n).
Correct
Each F(k) is computed only once and then cached for O(1) lookups.
Bottom-up dynamic programming for Fibonacci takes O(n) time and O(n) space.
Correct
You fill an array of size n in one pass storing every F(i).
You can optimize bottom-up Fibonacci to O(1) space by keeping only the last two values.
Correct
F(i) depends only on F(i–1) and F(i–2), so no full array is needed.
A tail-recursive Fibonacci in a language with tail-call optimization uses O(n) time and O(1) stack space.
Correct
Tail calls can be optimized into a loop, so the recursion depth stays constant.
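An accumulator-style sketch of that tail-recursive shape, written in Python for consistency (note that CPython does not perform tail-call optimization, so the O(1) stack claim only holds in a TCO-capable language):

```python
def fib_tail(n, a=0, b=1):
    """Tail-recursive Fibonacci: a and b carry F(i) and F(i+1), and the
    recursive call is the last action, so a TCO compiler can reuse the frame."""
    if n == 0:
        return a
    return fib_tail(n - 1, b, a + b)

print(fib_tail(20))  # 6765
```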
Matrix exponentiation (the Q-matrix) computes F(n) in O(log n) time via repeated squaring.
Correct
Exponentiation by squaring halves the exponent at each step on a 2×2 matrix.
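A compact Python sketch of the Q-matrix approach (helper names are mine; only 2×2 integer matrices are involved):

```python
def mat_mul(A, B):
    """Product of two 2x2 matrices."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def fib_matrix(n):
    """F(n) from Q^n where Q = [[1,1],[1,0]], via exponentiation by squaring: O(log n) multiplications."""
    result = [[1, 0], [0, 1]]   # identity matrix
    base = [[1, 1], [1, 0]]     # the Q-matrix
    while n > 0:
        if n & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)  # square the base, halve the exponent
        n >>= 1
    return result[0][1]             # Q^n = [[F(n+1), F(n)], [F(n), F(n-1)]]

print(fib_matrix(10))  # 55
```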
The fast-doubling method yields F(n) in O(log n) by computing F(2k) and F(2k+1) from F(k),F(k+1).
Correct
Using the identities F(2k) = F(k)·(2·F(k+1) − F(k)) and F(2k+1) = F(k+1)² + F(k)² halves n at each recursion step.
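A fast-doubling sketch in Python built directly on those identities (it returns the pair (F(n), F(n+1)) so each level has everything it needs):

```python
def fib_pair(n):
    """Fast doubling: returns (F(n), F(n+1)) in O(log n) arithmetic operations."""
    if n == 0:
        return (0, 1)
    fk, fk1 = fib_pair(n >> 1)            # F(k), F(k+1) with k = n // 2
    f2k = fk * (2 * fk1 - fk)             # F(2k)   = F(k) * (2*F(k+1) - F(k))
    f2k1 = fk1 * fk1 + fk * fk            # F(2k+1) = F(k+1)^2 + F(k)^2
    if n & 1:
        return (f2k1, f2k + f2k1)         # n odd:  (F(2k+1), F(2k+2))
    return (f2k, f2k1)                    # n even: (F(2k), F(2k+1))

print(fib_pair(10)[0])  # 55
```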
Binet’s formula computes F(n) exactly in O(1) time for any n using floating-point arithmetic.
Incorrect
Floating-point rounding of the irrational constants √5 and φ breaks exactness for large n.
For n ≤ 70, Binet’s formula with double precision always yields the correct integer Fibonacci number.
Correct
IEEE-754 doubles have a 53-bit mantissa, enough to represent F(70) ≈ 1.9×10¹⁴ exactly.
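A small Python sketch of Binet's formula with doubles, useful only in that safe range (per the card, roughly n ≤ 70):

```python
import math

def fib_binet(n):
    """Binet's formula evaluated in IEEE-754 double precision; trust only for small n."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    return round(phi ** n / sqrt5)

print(fib_binet(70))  # 190392490709135, the exact value of F(70)
```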
Computing Fibonacci mod m can be done in O(log n) time by applying fast doubling (or matrix exp.) with all operations mod m.
Correct
Both matrix and doubling formulas commute with modular reduction.
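A sketch of fast doubling with every intermediate result reduced mod m (the fib_mod name is illustrative):

```python
def fib_mod(n, m):
    """Returns (F(n) mod m, F(n+1) mod m) in O(log n) time via fast doubling."""
    if n == 0:
        return (0, 1 % m)
    fk, fk1 = fib_mod(n >> 1, m)
    f2k = fk * ((2 * fk1 - fk) % m) % m     # Python's % always yields a non-negative result
    f2k1 = (fk1 * fk1 + fk * fk) % m
    if n & 1:
        return (f2k1, (f2k + f2k1) % m)
    return (f2k, f2k1)

print(fib_mod(10**18, 1_000_000_007)[0])  # instant even for an astronomically large index
```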
Decorating the naive recursive Fibonacci in Python with @lru_cache yields overall O(n) time.
Correct
lru_cache memoizes calls so each F(k) is computed once.
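The decorator version looks like this, using functools.lru_cache with an unbounded cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: every F(k) is stored after its first computation
def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(100))  # 354224848179261915075
```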
Naively parallelizing the recursive Fibonacci yields linear speedup with the number of processors.
Incorrect
Task-spawning overhead and the heavy overlap between the two recursive branches keep real-world parallel gains far below linear speedup.
F(92) is the largest Fibonacci that fits in a signed 64-bit integer.
Correct
F(92)≈7.54×10¹⁸<2⁶³−1, while F(93) exceeds it.
Negative-index Fibonacci numbers satisfy F(−n)= (−1)^(n+1) ·F(n).
Correct
This extension makes the recurrence F(k)=F(k−1)+F(k−2) valid for all integer k.
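A small Python sketch of that extension (fib_neg is an illustrative helper built on the two-variable loop):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_neg(n):
    """Fibonacci for any integer index via F(-n) = (-1)**(n+1) * F(n)."""
    if n >= 0:
        return fib(n)
    k = -n
    return (-1) ** (k + 1) * fib(k)

print([fib_neg(i) for i in range(-5, 6)])  # [5, -3, 2, -1, 1, 0, 1, 1, 2, 3, 5]
```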
The recursive Fibonacci algorithm has O(n) time complexity.
Incorrect
It has exponential time complexity O(2ⁿ) due to redundant calculations of overlapping subproblems.
An iterative solution can compute F(n) using O(1) space.
Correct
By storing only the last two values, no additional memory is needed.
Memoization reduces the recursive Fibonacci algorithm’s time complexity to O(n).
Correct
Memoization eliminates redundant calculations by caching previously computed results.
Tail recursion can optimize the Fibonacci algorithm’s space usage.
Correct
Tail recursion allows compilers to reuse stack frames, reducing space to O(1) in optimized environments.
There exists a Fibonacci algorithm with O(log n) time complexity.
Correct
Matrix exponentiation or fast doubling methods leverage exponentiation by squaring for logarithmic time.
The Fibonacci sequence cannot be extended to negative integers.
Incorrect
Negative indices follow F(-n) = (-1)ⁿ⁺¹ ⋅ F(n), creating a bi-directional sequence.
Calculating F(n) for n ≥ 47 may cause integer overflow in 32-bit systems.
Correct
F(47) = 2,971,215,073 exceeds the 32-bit signed integer maximum (2,147,483,647).
Using Binet’s formula gives exact Fibonacci numbers for all n.
Incorrect
Floating-point precision errors make it unreliable for large n (e.g., n > 75).
The Fibonacci sequence can be calculated in parallel with no shared state.
Incorrect
Each term depends on prior terms, creating unavoidable sequential dependencies.
The iterative Fibonacci algorithm always performs fewer operations than the recursive approach.
Correct
The iterative version uses O(n) operations, while the naive recursive version uses O(2ⁿ) due to exponential redundancy.
The naive recursive implementation of Fibonacci (without memoization) has an exponential time complexity of O(2^n).
Incorrect
The tight bound is Θ(φⁿ) ≈ Θ(1.618ⁿ), governed by the golden ratio; the recursion tree does not double at every step, so O(2ⁿ) overstates the growth rate.
The space complexity of an iterative Fibonacci implementation that only stores the two most recent values is O(1).
Correct
Only two variables are needed regardless of which Fibonacci number you're calculating.
Computing the 100th Fibonacci number is impossible without BigInteger support due to integer overflow.
Correct
The 100th Fibonacci number has 21 digits and exceeds the capacity of standard 64-bit integers.
Using matrix exponentiation, the nth Fibonacci number can be calculated in O(log n) time.
Correct
Raising the Q-matrix to the nth power with exponentiation by squaring (a "divide and conquer" approach to matrix powers) needs only O(log n) matrix multiplications.
Memoizing a recursive Fibonacci function improves the time complexity from exponential to O(n).
Correct
Memoization ensures each Fibonacci value is calculated exactly once, resulting in linear time complexity.
The sum of the first n Fibonacci numbers equals F(n+2) - 1.
Correct
This mathematical property allows calculating the sum in constant time if you know the (n+2)th Fibonacci number.
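A quick Python check of the identity for one value of n (not a proof, just a sanity check with the simple iterative helper):

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

n = 10
# Sum of the first n Fibonacci numbers F(1)..F(n) equals F(n+2) - 1.
assert sum(fib(i) for i in range(1, n + 1)) == fib(n + 2) - 1
print("identity holds for n =", n)
```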
Negative indices in the Fibonacci sequence are meaningless and undefined.
Incorrect
The sequence can be extended to negative indices where F(-n) = (-1)^(n+1) × F(n).
The closed-form solution using the golden ratio (Binet's formula) is the most efficient way to calculate any Fibonacci number.
Incorrect
Floating-point precision issues make it unreliable for large indices despite its O(1) theoretical complexity.
Two consecutive Fibonacci numbers are always coprime (their greatest common divisor is 1).
Correct
This property can be proven using the fact that gcd(F(n), F(n+1)) = gcd(F(n), F(n-1)) = … = gcd(F(1), F(0)) = 1.
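A short numeric spot-check of this coprimality in Python (again not a proof, just a scan over the first few dozen consecutive pairs):

```python
from math import gcd

a, b = 1, 1  # F(1), F(2)
for _ in range(40):
    assert gcd(a, b) == 1  # consecutive Fibonacci numbers are coprime
    a, b = b, a + b
print("gcd of each consecutive pair checked: all equal 1")
```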
A bottom-up dynamic programming approach to Fibonacci requires O(n) space complexity.
Incorrect
We can use the space-optimized version that only keeps track of the last two numbers, requiring O(1) space.
A recursive Fibonacci solution with O(2^n) time complexity can be optimized to O(n) time using memoization.
Correct
Memoization caches computed results, eliminating redundant calculations in the recursive tree.
Calculating Fibonacci numbers iteratively (with two variables) uses less memory than recursion with memoization.
Correct
Iteration uses O(1) auxiliary space, while recursive memoization needs O(n) space for the cache plus O(n) recursion stack.
For fib(50), a recursive solution without optimization runs faster than an iterative solution in modern hardware.
Incorrect
The exponential time complexity of naive recursion makes it infeasible for n=50, while iteration runs in linear time.
The matrix exponentiation method can compute fib(n) in O(log n) time.
Correct
Matrix exponentiation leverages logarithmic-time exponentiation by squaring.
Closed-form Binet’s formula (using golden ratio) is practical for exact Fibonacci numbers in programming.
Incorrect
Floating-point precision errors make it unreliable for exact large integers.
Tail recursion optimization for Fibonacci avoids stack overflow errors for large n.
Correct
Compilers that perform tail-call optimization transform the tail call into iteration, preventing stack buildup.
Fibonacci heaps use Fibonacci sequences to achieve O(1) amortized time for insert operations.
Correct
The name comes from the analysis: a tree whose root has degree k contains at least F(k+2) nodes, which bounds node degrees, while lazy insertion into the root list is what gives the O(1) amortized insert.
Dynamic programming bottom-up approach solves Fibonacci with less memory than top-down.
Correct
Bottom-up DP uses O(1) space (like iteration), while top-down with memoization uses O(n).
Generating Fibonacci numbers via a generator (lazy evaluation) is memory-efficient for streaming sequences.
Correct
Generators compute on-demand, using constant memory per value.
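A typical Python generator sketch, which keeps only the current pair of values in memory:

```python
from itertools import islice

def fib_gen():
    """Lazily yields F(0), F(1), F(2), ... one value at a time."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(list(islice(fib_gen(), 10)))  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```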
fib(n) % m (modulus) can be computed in O(log n) time using Pisano period optimization.
Correct
Fibonacci numbers mod m repeat with the Pisano period π(m), so n can be reduced modulo π(m) and the remaining term computed with fast modular arithmetic.
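A sketch of the period-reduction idea in Python (pisano_period is an illustrative helper; finding the period this way costs O(m), and the reduced index can then be handled by any fast method, e.g., the fast-doubling-mod-m sketch above):

```python
def pisano_period(m):
    """Length of the cycle of Fibonacci numbers mod m (for m >= 2); the period is at most 6*m."""
    a, b = 0, 1
    for i in range(1, 6 * m + 1):
        a, b = b, (a + b) % m
        if (a, b) == (0, 1):   # the pair (0, 1) marks the start of a new cycle
            return i

def fib_mod_pisano(n, m):
    """F(n) mod m: reduce n by the Pisano period, then iterate over the short cycle."""
    n %= pisano_period(m)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

print(fib_mod_pisano(2015, 3))  # 1, since the Pisano period of 3 is 8 and F(7) mod 3 = 1
```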
A recursive Fibonacci implementation without optimization runs in O(n) time.
Incorrect
It has exponential time complexity (O(2ⁿ)) due to redundant calculations.
Memoization reduces the time complexity of recursive Fibonacci to O(n).
Correct
Memoization stores computed values, eliminating redundant work.
An iterative Fibonacci solution uses more memory than a naive recursive one.
Incorrect
Iterative approaches typically use O(1) space, while recursion uses O(n) stack space.
Binet’s formula provides an exact Fibonacci number for any n using floating-point arithmetic.
Incorrect
Floating-point precision errors occur for large n, causing inaccuracies.
Matrix exponentiation can compute Fibonacci(n) in O(log n) time.
Correct
It leverages exponentiation by squaring for logarithmic time complexity.
Base cases are unnecessary if the Fibonacci function is called with n ≥ 2.
Incorrect
Base cases (n=0, n=1) are required to terminate recursion correctly.
Tail recursion optimization converts a recursive Fibonacci solution to O(n) time.
Incorrect
Tail-call optimization reuses stack frames but does not remove the redundant calls of the naive recursion, so the time stays exponential (O(2ⁿ)) without memoization.
Dynamic programming is not useful for optimizing Fibonacci computation.
Incorrect
DP's core idea of storing subproblem results applies directly here, giving O(n) time.
The Fibonacci sequence is defined as starting with 1 and 2.
Incorrect
The standard definition starts with 0 and 1 (or 1 and 1, depending on convention).
A basic Fibonacci function will correctly handle negative input values.
Incorrect
Negative indices require special handling (e.g., the negafibonacci extension) or input validation.