Coefficient matrix
Square array containing the coefficients of the unknowns in a system of linear equations
Runge phenomenon
Oscillatory behavior that occurs when high-degree polynomial interpolation is applied over equally spaced points
NaN (Not a Number)
Special floating-point value indicating an undefined or unrepresentable result, such as 0/0
Simpson’s Rule
An integration technique that fits quadratic polynomials to subintervals to improve accuracy over the trapezoidal rule
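As a quick illustration, a minimal composite Simpson's rule in Python (the helper name `simpson` is illustrative, not from the source):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    # Interior points get weight 4 (odd index) or 2 (even index).
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

# Integrate sin over [0, pi]; the exact value is 2.
approx = simpson(math.sin, 0.0, math.pi, 10)
```

Even with only 10 subintervals the error is on the order of 1e-4, far better than the trapezoidal rule at the same cost.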
Iterative refinement
A technique that repeatedly improves an approximate solution by correcting residual errors
Polynomial Interpolation
Construction of a polynomial that passes exactly through a given set of data points
Error bound for Taylor polynomial
Expression that limits the remainder term using the next derivative evaluated at an intermediate point
Stability of an algorithm
Property describing how errors are amplified or damped as the algorithm progresses
Precision loss due to scaling
Reduction in significant digits when a value is multiplied or divided by a large factor before computation
Mantissa (significand)
23-bit fraction that, together with the exponent, determines the magnitude of the value
Determinant
Scalar value computed from a square matrix that indicates whether the matrix is invertible
Scaling for numerical stability
Adjusting the magnitude of inputs or intermediate results to avoid loss of significance during subtraction of similar numbers
Error propagation analysis
Method for estimating how uncertainties in inputs affect the uncertainty of the final output
Floating-point representation of zero
All bits zero for +0 and sign bit set with all other bits zero for -0, both considered equal in magnitude
3×3 matrix
Square array with three rows and three columns, often used to represent three simultaneous linear equations
Numerical solution
Approximate result obtained by applying computational algorithms to discretized equations
Natural logarithm Taylor series
Series: ln(1+x) = x − x²/2 + x³/3 −…
Valid for −1 < x ≤ 1
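A partial sum of this series can be checked against the library logarithm (the helper name `ln1p_series` is illustrative):

```python
import math

def ln1p_series(x, terms):
    """Partial sum of ln(1+x) = x - x^2/2 + x^3/3 - ..."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, terms + 1))

approx = ln1p_series(0.5, 30)
exact = math.log(1.5)
```

At x = 0.5 thirty terms already agree with math.log(1.5) to better than nine decimal digits; near x = 1 the series converges much more slowly.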
Bit precision
Number of binary digits used to store a value, directly affecting the smallest distinguishable change
Floating-point rounding modes
Options such as round-to-nearest, round-toward-zero, round-up, and round-down that dictate how inexact results are rounded
IEEE 754
A standard that defines binary floating-point formats, rounding rules, and special values for computers
Argument reduction
A technique that maps a large input to an equivalent value within a primary interval to improve computational accuracy
Linear system
Set of equations in which each term is either a constant or a product of a constant and a single variable
Normalization of scientific notation
Adjusting a number so that its leading digit is non-zero, analogous to the implicit leading 1 in binary floating-point
Pivoting in Gaussian elimination
Reordering rows or columns to place larger elements on the diagonal, reducing round-off error
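A minimal sketch of elimination with partial pivoting on a small dense system (the function name is illustrative):

```python
def solve_with_partial_pivoting(A, b):
    """Gaussian elimination with partial pivoting, then back substitution."""
    n = len(A)
    A = [row[:] for row in A]  # work on copies
    b = b[:]
    for k in range(n):
        # Swap in the row with the largest pivot magnitude in column k.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# 2x + y = 3, x + 3y = 5  ->  x = 0.8, y = 1.4
x = solve_with_partial_pivoting([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0])
```

The row swap keeps the multipliers m bounded by 1 in magnitude, which is what limits round-off growth.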
Floating-point overflow handling
A procedure that substitutes infinity or the largest representable value when a computation exceeds the upper limits
LU (lower-upper) decomposition
Factorization of a matrix into a lower-triangular and an upper-triangular matrix to simplify solving linear systems
Cosine Taylor series
Series: cos x = 1 − x²/2! + x⁴/4! − …
Converges for all x
Subtracting nearly equal numbers
Operation that can cause catastrophic cancellation, dramatically reducing the number of accurate digits
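The effect is easy to provoke: for large x, computing sqrt(x² + 1) − x directly subtracts nearly equal numbers, while the algebraically equivalent form 1 / (sqrt(x² + 1) + x) avoids the subtraction entirely.

```python
import math

x = 1e8
# Direct form: sqrt(1e16 + 1) rounds to exactly 1e8 in double precision,
# so the subtraction returns 0.0 and every accurate digit is lost.
naive = math.sqrt(x * x + 1.0) - x
# Rearranged form: no subtraction of close values, result is ~5e-9.
stable = 1.0 / (math.sqrt(x * x + 1.0) + x)
```

The true value is about 5e-9; the naive form gets zero correct digits, the stable form gets nearly all of them.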
Exponent field
8-bit section that stores a biased exponent to scale the mantissa
Relative error
Ratio of the absolute error to the true value, often expressed as a percentage
Known vector
Column vector representing the constant terms on the right-hand side of a linear system
Rounding error
Difference between the exact mathematical result and its rounded floating-point representation
Newton-Raphson method
Iterative root-finding technique that uses the function and its derivative to converge quadratically to a zero
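A minimal sketch of the iteration x_{n+1} = x_n − f(x_n)/f'(x_n) (helper names and the tolerance are illustrative):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration; stops when the step falls below tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("did not converge")

# Root of x^2 - 2 starting from 1.0 converges quadratically to sqrt(2).
root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
```

Quadratic convergence means the number of correct digits roughly doubles each iteration once the iterate is close to the root.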
Adaptive step size
Strategy that varies the interval size during integration to maintain a desired error tolerance
Binary exponent bias for 32-bit format
Constant value 127 added to the actual exponent to encode it as an unsigned 8-bit field
Machine epsilon
The smallest number that, when added to 1.0, yields a distinguishable result in the given floating-point format
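This definition translates directly into a loop: halve a candidate until adding it to 1.0 no longer changes the sum.

```python
import sys

def machine_epsilon():
    """Smallest power of two that still changes 1.0 when added to it."""
    eps = 1.0
    while 1.0 + eps / 2.0 != 1.0:
        eps /= 2.0
    return eps

eps = machine_epsilon()  # 2**-52 for IEEE 754 double precision
```

The result matches sys.float_info.epsilon, the value Python reports for its 64-bit floats.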
Underflow
Condition where a number is too small to be represented in normalized form, causing it to become zero
Floating-point exponent bias
Constant added to the actual exponent to allow representation of both positive and negative exponents
Denormalized number
Floating-point value with a leading mantissa bit of 0, used to represent numbers closer to zero than the smallest normalized value
Cramer’s Rule
Method for solving a linear system by expressing each variable as a ratio of two determinants
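For a 2×2 system the rule is short enough to write out in full (the function name is illustrative):

```python
def cramer_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system via Cramer's rule: x_i = det(A_i)/det(A)."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("singular coefficient matrix")
    # Replace each column of A with b in turn and take the determinant ratio.
    x = (b1 * a22 - a12 * b2) / det
    y = (a11 * b2 - b1 * a21) / det
    return x, y

# 2x + y = 3, x + 3y = 5  ->  x = 0.8, y = 1.4
x, y = cramer_2x2(2.0, 1.0, 1.0, 3.0, 3.0, 5.0)
```

For larger systems the determinant cost grows quickly, which is why elimination methods are preferred in practice.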
Euler (historical contribution)
Pioneered numerical integration formulas and series expansions for differential equations
Sparse matrix
Matrix in which most elements are zero, allowing specialized storage and faster operations
Convergence testing
Procedure that checks whether successive refinements of a computation approach a stable result
Analytical solution
Exact expression derived from mathematical theory that gives a closed-form result
Floating-point underflow handling
Procedure that substitutes zero or a denormalized value when a computation falls below the smallest normalized magnitude
Trapezoidal rule
Numerical integration method that approximates the area under a curve by summing areas of trapezoids
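A minimal composite version in Python (the helper name `trapezoid` is illustrative):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals of width h."""
    h = (b - a) / n
    # Endpoints get weight 1/2; interior points get weight 1.
    total = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        total += f(a + i * h)
    return total * h

# Integrate x^2 over [0, 1]; the exact value is 1/3.
approx = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```

The error shrinks proportionally to h², so halving the step size cuts the error by roughly a factor of four.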
Unknown vector
Column vector that holds the variables to be solved in a linear system
Condition number
A metric that quantifies the sensitivity of a system’s solution to small changes in the input data
Small-angle approximation
Simplification where sinθ ≈ θ and tanθ ≈ θ when θ is measured in radians and is near zero
Convergence rate
Speed at which an iterative method approaches the exact solution as the step size decreases or the iteration count increases
Jacobian matrix
Matrix of first-order partial derivatives that describes how a vector-valued function changes near a point
Laplace expansion
Recursive formula for determinant calculation that expands along a chosen row or column using minors and cofactors
Discrete Fourier Transform (DFT)
Algorithm that converts a finite sequence of equally spaced samples into frequency components
Lagrange form of interpolation polynomial
Explicit formula that combines basis polynomials weighted by the function values at the nodes
Normalized mantissa implicit bit
Assumed leading 1 in the mantissa of a normalized floating-point number, which is not stored explicitly
Arbitrary precision arithmetic
Software technique that allows calculations with a user-defined number of digits beyond hardware limits
Sign bit
Single binary digit that indicates whether the stored number is positive (0) or negative (1)
Gauss (historical contribution)
Introduced systematic elimination methods and contributed to the theory of least squares
Absolute error
Magnitude of the difference between an approximate value and the exact mathematical value
32-bit floating-point format
Binary representation using 1 sign bit, 8 exponent bits, and 23 mantissa bits
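The three fields can be inspected directly with the standard struct module (the helper name `float32_fields` is illustrative):

```python
import struct

def float32_fields(x):
    """Unpack sign, biased exponent, and mantissa bits of a 32-bit float."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF   # 8-bit biased exponent (bias 127)
    mantissa = bits & 0x7FFFFF       # 23-bit fraction, implicit leading 1
    return sign, exponent, mantissa

# 1.0 encodes as sign 0, biased exponent 127 (true exponent 0), mantissa 0.
fields = float32_fields(1.0)
```

Checking -2.0 the same way gives sign 1 and biased exponent 128, i.e. a true exponent of 1.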
Unit scaling
Choosing measurement units that keep numerical values within the optimal range of the floating-point format
Normalized number
Floating-point value whose leading mantissa bit is implicitly 1, providing maximum precision
Round-to-nearest even
Tie-breaking rule that rounds a value to the nearest representable number; if exactly halfway, selects the one with an even least-significant bit
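Python's built-in round() uses this same half-to-even tie-breaking rule, so exact halves alternate between rounding down and up:

```python
# Each argument is exactly representable in binary, so these are true ties;
# round-half-to-even sends each to the neighbor with an even last digit.
ties = [round(0.5), round(1.5), round(2.5), round(3.5)]  # 0, 2, 2, 4
```

Over many operations this rule avoids the systematic upward drift that always-round-half-up would introduce.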
Finite difference approximation
Method that estimates derivatives by using function values at discrete points
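The second-order central difference is a common instance (the helper name and the default step are illustrative):

```python
def central_difference(f, x, h=1e-6):
    """Second-order central difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Derivative of x^3 at x = 2 is exactly 12.
deriv = central_difference(lambda x: x ** 3, 2.0)
```

Choosing h trades truncation error (shrinks with h²) against round-off error (grows as h shrinks), so a moderate step like 1e-6 works better than an extremely small one.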
Taylor series
Infinite sum of derivatives evaluated at a point, used to approximate a smooth function locally
Overflow
Condition where a number exceeds the largest representable magnitude, resulting in infinity
Newton (historical contribution)
Developed early numerical techniques such as the Newton-Raphson method for root finding