Numerical Analysis
The analysis and development of methods used to approximate solutions to mathematical problems.
In what ways is theory intertwined with numerical analysis?
theories guarantee that the methods actually work
In what ways are applications intertwined with numerical analysis?
applications motivate the development of new theories
applications test and validate theories
How can you check whether a Python package is producing a reasonable result?
code the method yourself
compare your own result against the package's output
Algorithmic error
error arising from using an approximate method instead of an exact one
Floating-Point error
“round-off error”; error introduced by the computer rounding numbers to finite precision
Truncation error
error from using a finite number of terms to approximate an infinite process
Bisection Method
split [a, b] in half, keep the subinterval where the function changes sign (and therefore contains a root), repeat
Advantages of Bisection method
always converges (when f(a) and f(b) have opposite signs), easy to implement
Disadvantages of Bisection method
slow
doesn’t work for roots of even multiplicity (no sign change to detect)
unlikely to hit the exact solution
only linear order of convergence
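A minimal Python sketch of the bisection method described above; the function name, tolerance, and the test equation x**2 - 2 = 0 are illustrative choices, not part of the cards.

```python
def bisect(f, a, b, tol=1e-8, max_iter=100):
    """Halve [a, b], keep the subinterval where f changes sign, repeat."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2
        if f(m) == 0 or (b - a) / 2 < tol:
            return m
        if f(a) * f(m) < 0:
            b = m   # sign change in [a, m], so the root is there
        else:
            a = m   # otherwise the root is in [m, b]
    return (a + b) / 2

# bisect(lambda x: x**2 - 2, 0, 2) converges (slowly) to sqrt(2) ~ 1.4142
```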
Fixed-Point Iteration
rewrite f(x) = 0 as x = g(x), then iterate
p_n+1 = g(p_n)
Advantages of FP Iteration
only one function evaluation is required per iteration
quadratic order if g’(p) = 0 at the fixed point and the second derivative g’’ is continuous
Disadvantages of FP Iteration
requires |g’(x)| < 1 on the interval to guarantee a unique fixed point and convergence; convergence is generally only linear
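A minimal sketch of fixed-point iteration, assuming the equation has already been rewritten as x = g(x); the choice g(x) = (x + 2/x)/2 for computing sqrt(2) is illustrative.

```python
def fixed_point(g, p0, tol=1e-8, max_iter=100):
    """Iterate p_{n+1} = g(p_n) until successive iterates agree to tol."""
    p = p0
    for _ in range(max_iter):
        p_new = g(p)          # only one function evaluation per iteration
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# x**2 = 2 rewritten as x = g(x) = (x + 2/x) / 2; here g'(sqrt(2)) = 0,
# so the iteration converges quadratically.
# fixed_point(lambda x: (x + 2/x) / 2, 1.0) ~ 1.41421356
```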
Newton’s Method
using tangent line at current approx. to find next approx.
p_n+1=p_n-F(p_n)/F’(p_n)
Advantages of Newton’s method
very fast, quadratic convergence
Disadvantages of Newton’s method
often requires a good initial guess
two function evaluations at each iteration (F and its derivative F’)
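A minimal sketch of Newton’s method; note the two evaluations (the function and its derivative) at each step. The test equation and starting guess are illustrative.

```python
def newton(f, fprime, p0, tol=1e-8, max_iter=50):
    """Newton's method: p_{n+1} = p_n - f(p_n) / f'(p_n)."""
    p = p0
    for _ in range(max_iter):
        p_new = p - f(p) / fprime(p)   # two evaluations per step: f and f'
        if abs(p_new - p) < tol:
            return p_new
        p = p_new
    return p

# newton(lambda x: x**2 - 2, lambda x: 2 * x, 1.0) ~ 1.41421356
# (a poor starting guess, e.g. one where f'(p0) is near 0, can fail)
```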
Secant Method
replace F’ in Newton’s method with the slope of the secant line through the two most recent approximations:
p_n+1 = p_n - F(p_n)(p_n - p_n-1) / (F(p_n) - F(p_n-1))
Advantages of Secant method
fast
order of convergence (1 + sqrt(5))/2 ≈ 1.618 (superlinear)
only 1 new function evaluation per iteration
Disadvantages of Secant method
needs two good initial guesses to start
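A minimal sketch of the secant method; it keeps the last two iterates so only one new function evaluation is needed per step. The test equation and starting guesses are illustrative.

```python
def secant(f, p0, p1, tol=1e-8, max_iter=50):
    """Secant method: Newton's method with f' replaced by a secant slope."""
    f0, f1 = f(p0), f(p1)
    for _ in range(max_iter):
        p2 = p1 - f1 * (p1 - p0) / (f1 - f0)
        if abs(p2 - p1) < tol:
            return p2
        p0, f0 = p1, f1
        p1, f1 = p2, f(p2)   # one new function evaluation per iteration
    return p1

# secant(lambda x: x**2 - 2, 1.0, 2.0) ~ 1.41421356
```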
What is the purpose of interpolation?
estimate values between known data points
build useful approximations from limited data
Pros of the Global Method (Lagrange)
very accurate for smooth functions
one unified formula
Cons of the Global Method (Lagrange)
high-degree polynomials can oscillate badly
sensitive to small changes in data
not good for discontinuous behavior
Pros of the Local Method (Piecewise)
stable
efficient for large datasets
good for irregular behavior
Cons of the Local Method (Piecewise)
not one unified formula
derivatives may not be smooth in simpler methods
requires keeping track of intervals
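A small sketch contrasting the two approaches on Runge's function 1/(1 + 25x**2), a standard example where a single high-degree polynomial oscillates near the interval ends; the node count and evaluation point are illustrative.

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the single global Lagrange interpolating polynomial at x."""
    total = 0.0
    for i in range(len(x_nodes)):
        term = y_nodes[i]
        for j in range(len(x_nodes)):
            if j != i:
                term *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        total += term
    return total

x_nodes = np.linspace(-1, 1, 11)
y_nodes = 1 / (1 + 25 * x_nodes**2)               # Runge's function

x = 0.95
global_val = lagrange_eval(x_nodes, y_nodes, x)   # one formula, may oscillate near the ends
local_val = np.interp(x, x_nodes, y_nodes)        # piecewise linear: stable, but kinked at the nodes
```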
Why do truncation and round-off errors affect the derivative approx.?
finite difference formulas approximate a limit with a finite step size h (truncation error) and require subtracting nearly equal numbers (round-off error); shrinking h reduces the truncation error but amplifies the round-off error, so the overall accuracy depends on choosing h well.
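A short illustration of that trade-off using a forward difference for d/dx e^x at x = 1 (the step sizes are arbitrary): the error first shrinks as h decreases, then grows again once round-off dominates.

```python
import numpy as np

x, exact = 1.0, np.exp(1.0)                    # d/dx exp(x) at x = 1 is exp(1)
for h in [1e-1, 1e-4, 1e-8, 1e-12]:
    approx = (np.exp(x + h) - np.exp(x)) / h   # forward difference
    print(f"h = {h:.0e}   error = {abs(approx - exact):.2e}")
```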
Richardson Extrapolation
improves the accuracy of an approximation by combining two estimates of the same quantity—each computed with different step sizes—so that the leading error term cancels out
Advantages of Richardson Extrapolation
increases accuracy
reduces truncation error
helps choose a good step size
Disadvantages of Richardson Extrapolation
sensitive to round-off error
assumes smooth error behavior
requires extra computations
not ideal for extremely small h
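A minimal sketch of Richardson extrapolation applied to a centered difference: the estimates at step sizes h and h/2 are combined so the leading O(h**2) error term cancels, leaving an O(h**4) approximation. The function names and test function are illustrative.

```python
import numpy as np

def centered_diff(f, x, h):
    """Centered difference approximation of f'(x); error is O(h**2)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    """Combine the h and h/2 estimates so the O(h**2) error term cancels."""
    return (4 * centered_diff(f, x, h / 2) - centered_diff(f, x, h)) / 3

# d/dx sin(x) at x = 1 is cos(1):
h = 0.1
plain = abs(centered_diff(np.sin, 1.0, h) - np.cos(1.0))   # roughly 1e-3
better = abs(richardson(np.sin, 1.0, h) - np.cos(1.0))     # several orders smaller
```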
Gradient Descent
find the minimum of a function by iteratively moving in the direction of the steepest decrease, which is opposite to the gradient, until the function reaches its lowest value or converges to an acceptable approximation.
Challenges of gradient descent for a cost function with a large number of variables
lots of data, so each step takes time
computing the full gradient at every step is very expensive
getting stuck in local minima or saddle points
Ways to mitigate these challenges of GD
analyze a good initial learning rate before running the full computation
use a random sample of the data to “train” each step (stochastic / mini-batch gradient descent)
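A minimal sketch of plain gradient descent on a simple two-variable function; the learning rate, tolerance, and test function are illustrative choices. In the large-data setting above, grad would be replaced by an estimate computed from a random sample of the data (stochastic / mini-batch gradient descent).

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, tol=1e-8, max_iter=1000):
    """Step opposite the gradient until the update becomes negligibly small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = lr * grad(x)
        x = x - step
        if np.linalg.norm(step) < tol:
            break
    return x

# Minimize f(x, y) = (x - 3)**2 + (y + 1)**2; gradient = (2(x - 3), 2(y + 1)),
# so the iteration should approach the minimizer (3, -1).
grad = lambda v: np.array([2 * (v[0] - 3), 2 * (v[1] + 1)])
print(gradient_descent(grad, [0.0, 0.0]))
```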