Newton's method
iterative method for finding a root of f(x) = 0: x_{k+1} = x_k - f(x_k)/f'(x_k)
converges quadratically (fast), but if f'(x) = 0 at the root or the initial guess is way off, it drops down to linear (or fails to converge)
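The update above can be sketched as follows (a minimal version; the function name and tolerances are my own choices):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeat x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / fprime(x)
    return x

# root of f(x) = x^2 - 2 near 1.5, i.e. sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.5)
print(root)  # ≈ 1.41421356...
```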
Taylor series
uses derivatives to approximate f(x) near a point a
f(x) = f(a) + f'(a)(x-a) + f''(a)(x-a)²/2! + f'''(a)(x-a)³/3! + …
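A quick sketch of the partial sum for e^x (chosen because every derivative of e^x at a is e^a; the helper name is my own):

```python
import math

def taylor_exp(x, a=0.0, terms=10):
    """Partial Taylor sum of e^x about a: sum of e^a * (x-a)^k / k!."""
    return sum(math.exp(a) * (x - a) ** k / math.factorial(k)
               for k in range(terms))

# more terms -> closer to the true value
print(taylor_exp(0.5, terms=10), math.exp(0.5))
```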
fixed point iteration
rewrites f(x) = 0 as x = g(x) and iterates x_{k+1} = g(x_k); Newton's method is the special case g(x) = x - f(x)/f'(x)
make sure |g'(x)| < 1 near the fixed point, otherwise the iteration diverges
if g'(x) = 0 at the fixed point, convergence is quadratic; if g'(x) ≠ 0 (but |g'(x)| < 1), it is only linear
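A minimal sketch, using x = cos(x) as the example since |g'(x)| = |sin(x)| < 1 near its fixed point (names are my own):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=200):
    """Iterate x_{k+1} = g(x_k) until successive iterates stop moving."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# fixed point of g(x) = cos(x), about 0.7390851
print(fixed_point(math.cos, 1.0))
```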
secant method
approximates the derivative used in Newton's method with a finite difference: f'(x_k) ≈ (f(x_k) - f(x_{k-1})) / (x_k - x_{k-1})
use that to replace f'(x) in Newton's method
needs 2 initial guesses
converges superlinearly (order ≈ 1.618), but if the initial guesses are way off, it drops to linear
easier than Newton at finding roots b/c we don't need to find f'(x)
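A minimal sketch of the method (my own function name and stopping rule):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Newton's update with f'(x) replaced by a finite-difference slope."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol or f1 == f0:  # converged, or slope is flat
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

# root of x^3 - x - 2 between 1 and 2, about 1.52138
print(secant(lambda x: x**3 - x - 2, 1.0, 2.0))
```

Note that no derivative function is passed in, only f itself: that is the practical advantage over Newton's method.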
order of convergence
the rate d at which the error shrinks as the iterates approach the solution of f(x) = 0
if d = 1, it's linear (slowest)
if 1 < d < 2, it's superlinear
if d = 2, it's quadratic
if d = 3, it's cubic
if d > 3, it's a higher order (fastest)
bisection method
repeatedly halves an interval [a, b] with f(a)f(b) < 0, keeping the half where f changes sign
converges linearly, slowest
useful for functions that change sign, but takes time to find a, b that match the sign requirement
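A minimal sketch of the halving loop (names and tolerance are my own):

```python
def bisect(f, a, b, tol=1e-10):
    """Requires f(a) and f(b) to have opposite signs; halves [a, b] each step."""
    if f(a) * f(b) > 0:
        raise ValueError("f must change sign on [a, b]")
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:  # sign change is in the left half
            b = m
        else:                 # sign change is in the right half
            a = m
    return (a + b) / 2

# root of x^2 - 2 on [0, 2], i.e. sqrt(2)
print(bisect(lambda x: x * x - 2, 0.0, 2.0))
```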
polynomial interpolation
fits a polynomial P_n(x) through given data points to estimate f(x) at points not given
evaluation is faster with Horner's method
easy to find f'(x) and coefficients, but unreliable at high degrees with closely spaced samples (ill-conditioning / Runge's phenomenon)
Vandermonde matrix
used to find the coefficients of P_n(x) by solving the linear system V c = y
expensive to solve for the coefficients and the matrix is ill-conditioned (prone to rounding errors), but once the coefficients are known it's good at evaluating P_n(x) at a specific x (using Horner's method) and easy to find derivatives
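A small sketch of the Vandermonde approach plus Horner evaluation, on made-up data from f(x) = x² + x + 1 (the `horner` helper is my own):

```python
import numpy as np

# fit P_2 through 3 points by solving the Vandermonde system V c = y
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 7.0])          # samples of f(x) = x^2 + x + 1
V = np.vander(xs, increasing=True)      # columns: 1, x, x^2
coeffs = np.linalg.solve(V, ys)         # c0, c1, c2

def horner(coeffs, x):
    """Evaluate c0 + c1*x + c2*x^2 + ... with one multiply per coefficient."""
    result = 0.0
    for c in reversed(coeffs):
        result = result * x + c
    return result

print(coeffs)             # recovers [1, 1, 1]
print(horner(coeffs, 3))  # P_2(3) = 13
```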
Lagrange interpolation
better than Vandermonde at constructing P_n(x)
cheaper to construct P_n(x), but expensive to evaluate at a specific x and not good at finding derivatives
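A minimal sketch of the Lagrange form P_n(x) = Σ y_i L_i(x), on made-up data from f(x) = x² + x + 1 (the function name is my own):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate P_n(x) = sum_i y_i * L_i(x), L_i(x) = prod_{j != i} (x - x_j)/(x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# P_2 through (0,1), (1,3), (2,7), i.e. f(x) = x^2 + x + 1
print(lagrange_eval([0, 1, 2], [1, 3, 7], 3))  # 13.0
```

Note the cost: every evaluation at a new x redoes the double loop, which is why the notes call it expensive for a specific value.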
Newton interpolation
best overall tradeoff
pros: quadratic cost (O(n²)) to construct P_n(x), linear cost (O(n)) to evaluate at a specific x using Horner-style nesting; allows incremental interpolation and derivatives
incremental interpolation
an easier way to build Newton's interpolation: each new data point adds one new term instead of recomputing from scratch
gives the polynomial P_n(x) term by term
divided differences
uses a table to find the coefficients needed in the incremental interpolation P_n(x)
f[x_i, …, x_j] = (f[x_{i+1}, …, x_j] - f[x_i, …, x_{j-1}]) / (x_j - x_i)
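A sketch of the table plus the nested Newton-form evaluation, on made-up data from f(x) = x² + x + 1 (function names are my own):

```python
def divided_differences(xs, ys):
    """Top entry of each column of the divided-difference table gives the
    Newton-form coefficients f[x0], f[x0,x1], f[x0,x1,x2], ..."""
    n = len(xs)
    table = list(ys)
    coeffs = [table[0]]
    for level in range(1, n):
        table = [(table[i + 1] - table[i]) / (xs[i + level] - xs[i])
                 for i in range(n - level)]
        coeffs.append(table[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Horner-style nested evaluation of the Newton form."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

xs, ys = [0, 1, 2], [1, 3, 7]
c = divided_differences(xs, ys)
print(c)                      # [1, 2.0, 1.0]: P_2(x) = 1 + 2x + x(x-1)
print(newton_eval(xs, c, 3))  # 13.0
```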
Chebyshev points
used to choose the interpolation nodes so that P_n(x) stays close to the actual f(x)
minimizes the maximum of the product (x - x0)…(x - xn) in the error term f(x) - P_n(x)
x_i = ((a + b) / 2) + ((b - a) / 2) cos((i * pi) / n), for i = 0, …, n
for smooth f, guarantees that as the degree gets higher, f(x) - P_n(x) converges to 0 (avoiding the Runge behavior of equally spaced points)
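The node formula above, as a short sketch (my own function name):

```python
import math

def chebyshev_points(a, b, n):
    """n+1 Chebyshev points on [a, b]: x_i = (a+b)/2 + (b-a)/2 * cos(i*pi/n)."""
    return [(a + b) / 2 + (b - a) / 2 * math.cos(i * math.pi / n)
            for i in range(n + 1)]

# on [-1, 1] the points cluster toward the endpoints
print(chebyshev_points(-1, 1, 4))
```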
piecewise polynomials
better than a single high-degree polynomial
uses a low-degree polynomial s_k(x) on each subinterval I_k to build s(x)
error converges quickly as the subintervals shrink
piecewise cubic polynomial
each s_k(x) is always a cubic polynomial (4 coefficients, so 4 conditions per piece)
also converges at a high order
the error can be made even smaller than with lower-degree piecewise polynomials
cubic spline
forces a smooth curve through the points: s, s', and s'' are all continuous
each s_k(x) is a separate cubic polynomial
the interpolation and smoothness conditions leave 2 degrees of freedom, so 2 more equations are needed to make the spline unique
methods
not-a-knot: force the 3rd derivatives to match at x2 and x_{n-1}, so s1 = s2 and s_{n-2} = s_{n-1}
complete spline: if you know the derivative, use s1'(x1) = f'(x1) and s_{n-1}'(xn) = f'(xn)
natural cubic spline: use s''(x1) = 0 and s''(xn) = 0; s(x) straightens out toward the endpoints
periodic spline: if f(x) is periodic on [a, b], use s'(x1) = s'(xn) and s''(x1) = s''(xn)
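The natural end conditions above can be sketched by solving the standard tridiagonal system for the second derivatives M_i at the knots, with M_0 = M_n = 0 (a minimal sketch on made-up data; function names are my own):

```python
import numpy as np

def natural_spline_second_derivs(xs, ys):
    """Solve for the knot second derivatives M_i of a natural cubic spline:
    h_{i-1} M_{i-1} + 2(h_{i-1}+h_i) M_i + h_i M_{i+1} = 6 * (slope jump),
    with the natural conditions M_0 = M_n = 0."""
    n = len(xs) - 1
    h = np.diff(xs)
    M = np.zeros(n + 1)
    if n < 2:
        return M
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for i in range(1, n):
        if i > 1:
            A[i - 1, i - 2] = h[i - 1]
        A[i - 1, i - 1] = 2 * (h[i - 1] + h[i])
        if i < n - 1:
            A[i - 1, i] = h[i]
        rhs[i - 1] = 6 * ((ys[i + 1] - ys[i]) / h[i]
                          - (ys[i] - ys[i - 1]) / h[i - 1])
    M[1:n] = np.linalg.solve(A, rhs)
    return M

def spline_eval(xs, ys, M, x):
    """Evaluate the cubic piece s_k(x) whose interval contains x."""
    k = np.searchsorted(xs, x) - 1
    k = min(max(k, 0), len(xs) - 2)
    h = xs[k + 1] - xs[k]
    t1, t2 = xs[k + 1] - x, x - xs[k]
    return (M[k] * t1**3 + M[k + 1] * t2**3) / (6 * h) \
        + (ys[k] / h - M[k] * h / 6) * t1 \
        + (ys[k + 1] / h - M[k + 1] * h / 6) * t2

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 1.0, 0.0, 1.0])
M = natural_spline_second_derivs(xs, ys)
print(spline_eval(xs, ys, M, 1.0))  # ≈ 1.0, the spline interpolates the data
```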