Final Exam Study

73 Terms

1
New cards

Rank of Matrix

number of linearly independent rows or columns of a matrix

2
New cards

Rank

2

3
New cards

Rank

3

4
New cards

Representing systems of equations as a matrix, what is the augmented matrix

the coefficient matrix [A] with the right-hand-side vector [b] appended as an extra column, written [A|b]
5
New cards

properties of matrix

associative, distributive, transpose; NOT commutative

6
New cards

Solutions to systems of equations are where they

intersect

7
New cards

Linear systems can be expressed in matrix form as

[A][x]=[b]

8
New cards

Gaussian elimination solves for

x= inv([A])*[b]

9
New cards

Forward elimination

eliminates variables

10
New cards

Back substitution

solves for variables
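
A minimal Python/NumPy sketch (not part of the original cards) of Gaussian elimination: forward elimination removes variables below the diagonal, then back substitution recovers them. It assumes nonzero pivots (no row swapping).

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by forward elimination then back substitution."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    n = len(b)
    # Forward elimination: zero out entries below the diagonal
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i, k] / A[k, k]      # assumes pivot A[k, k] != 0
            A[i, k:] -= factor * A[k, k:]
            b[i] -= factor * b[k]
    # Back substitution: solve for x from the last row upward
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gauss_solve(A, b))   # should match np.linalg.solve(A, b)
```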

11
New cards

Interpolation

estimating intermediate values between data points

12
New cards

Polynomial interpolation

find the polynomial that exactly connects n data points

13
New cards

B-spline interpolation

find a piecewise sum of n basis functions that exactly connects the n data points

14
New cards

Linear interpolation formula

a1+(a2)x

15
New cards

Quadratic Interpolation formulation

a1+(a2)x+(a3)x²

16
New cards

Higher order polynomial interpolation

a1+(a2)x+...+(an)x^(n-1)
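
A small Python sketch, for illustration only, of polynomial interpolation: the coefficients a1..an come from solving the Vandermonde system built from a few made-up data points.

```python
import numpy as np

# Fit the degree-(n-1) polynomial a1 + a2*x + ... + an*x^(n-1)
# that passes exactly through n data points.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

V = np.vander(x, increasing=True)   # columns: 1, x, x^2, x^3
a = np.linalg.solve(V, y)           # coefficients a1..an

# Evaluate the interpolant at an intermediate point
xq = 1.5
print(np.polyval(a[::-1], xq))      # polyval expects highest power first
```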

17
New cards

extrapolation

making a prediction not justified by the data

18
New cards

interpolation error decreases

as step size decreases

19
New cards

higher order polynomial interpolation

can lead to high errors

21
New cards

b-spline interpolation approximates

data using sum of simple basis functions
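
An illustrative sketch using SciPy's make_interp_spline (one possible tool, not necessarily the one used in class) for a cubic B-spline interpolant through sample points.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Cubic B-spline interpolant: a piecewise sum of basis functions
# that still passes exactly through the data points.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 0.0, 5.0, 3.0])

spline = make_interp_spline(x, y, k=3)   # returns a BSpline object
print(spline(1.5))                        # evaluate between data points
```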

22
New cards

extrapolation can cause

large errors

23
New cards

optimization finds

max or min of f(x), roots of f’(x)

24
New cards

with optimization, there is no guarantee

to find global min or max

25
New cards

brute force optimization

tries all values to find max/min

26
New cards

disadvantages of brute force

can miss the max/min between sampled points and has huge computational cost
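
A toy Python sketch of brute-force optimization on a made-up function f(x) = -(x-2)^2 + 4: evaluate the function on a dense grid and keep the best value. Simple, but the true max can fall between grid points and the cost grows quickly as the grid gets finer.

```python
import numpy as np

def f(x):
    return -(x - 2.0) ** 2 + 4.0   # example function with a max at x = 2

# Brute force: evaluate f on a dense grid and keep the best value.
xs = np.linspace(-10, 10, 100001)
ys = f(xs)
i = np.argmax(ys)
print(xs[i], ys[i])                # approximately (2.0, 4.0)
```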

27
New cards

steepest ascent formula

xi+1 = xi + h*f'(xi)

28
New cards

as x approaches max,

steps get smaller

29
New cards

if h is too small, it will

converge slowly

30
New cards

if h is too big,

it may overshoot maximum
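
A minimal sketch of the steepest ascent update xi+1 = xi + h*f'(xi) on the same made-up quadratic; the step size h and starting point are arbitrary choices for illustration.

```python
def f_prime(x):
    return -2.0 * (x - 2.0)   # derivative of f(x) = -(x - 2)^2 + 4

# Steps shrink automatically as f'(x) -> 0 near the maximum;
# too small an h converges slowly, too large an h can overshoot.
h = 0.1
x = 0.0
for _ in range(100):
    x = x + h * f_prime(x)
print(x)   # approaches 2.0
```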

31
New cards

Newton's method optimization finds (requires f'' and may diverge)

local max and min and locations where f’(x)=0

32
New cards

main features of steepest ascent method

max, only needs 1 starting point, must be able to take f’, converges slowly, requires h and may not converge depending on h value

33
New cards

main features of newtons method

max or min, only needs 1 starting point, can diverge, must be able to take f’ and f’’, converges fastest

34
New cards

Newton's method vs. steepest ascent

converges faster when near maximum but can diverge
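
A sketch of Newton's method for optimization on the same toy example, using xi+1 = xi - f'(xi)/f''(xi); for a quadratic it lands on the optimum in a single step.

```python
def f_prime(x):
    return -2.0 * (x - 2.0)     # f'(x) for f(x) = -(x - 2)^2 + 4

def f_double_prime(x):
    return -2.0                 # f''(x)

# Newton's method finds where f'(x) = 0.  It needs f'' and one
# starting point; it converges fast near the optimum but can diverge.
x = 0.0
for _ in range(10):
    x = x - f_prime(x) / f_double_prime(x)
print(x)   # 2.0 (exact after one step for a quadratic)
```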

35
New cards

goal of linear regression

to find the line that best fits data

36
New cards

define error or residual formula

ei = yi - fi

37
New cards

coefficient of determination, r²

quantifies % of data explained by the regression line

38
New cards

what does r²=1 suggest about line

the line explains 100% of the variability in the data

39
New cards

regression

fitting models to data, usually by minimizing the sum of squared errors

40
New cards

linear regression

has unique solution, coefficients can be calculated algebraically

41
New cards

ways to quantify regression error

SSE and coefficient of determination

42
New cards

linear regression vs interpolation

both involve finding coefficients that best describe the data

43
New cards

least squares approximation

finds a line that gets as close as possible to the points

44
New cards

matrix in least squares approximation

not square, making exact solution generally not possible
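
An illustrative NumPy sketch of linear least squares with made-up data: the design matrix is not square, so it is solved in the least-squares sense, and r² is then computed from the SSE.

```python
import numpy as np

# Linear least-squares fit y ≈ a1 + a2*x with more points than coefficients.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.2, 3.8, 5.1])

A = np.column_stack([np.ones_like(x), x])          # columns: 1, x
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)     # a1, a2

# Residuals and coefficient of determination r^2
residuals = y - A @ coeffs
ss_res = np.sum(residuals ** 2)                    # SSE
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot
print(coeffs, r_squared)
```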

45
New cards

interpolation goes exactly

through all given points

46
New cards

as long as the equations are linear with respect to parameters, linear least-squares can be

reformulated as matrices and extended to more complex examples with multiple independent variables or nonlinearities

47
New cards

unlike linear least squares, non linear regression does not guarantee that

a global minimum will be found

48
New cards

brute force non linear approach

manually or programmatically tries all combinations of parameters

49
New cards

advantages of the brute force nonlinear approach

simple to implement, even coverage of the parameter range

50
New cards

disadvantages of the brute force nonlinear approach

computationally costly; the cost grows with the number of parameter values tried

51
New cards

goal of nonlinear regression

identify a set of model parameters that is most consistent with experimental data

52
New cards

optimization algorithms are

designed to efficiently search through the possible parameter combinations to identify the “best fit” parameter set that minimizes SSE

53
New cards

gradient SSE in nonlinear regression

indicates how much and in which direction the sum of squares error changes as parameter values change

54
New cards

gradient SSE in steepest descent, non linear regression

used to compute the direction of steepest slope; parameters are adjusted with a step size proportional to its magnitude

55
New cards

gradient-based optimization is based on

the Levenberg-Marquardt algorithm

56
New cards

Levenberg-Marquardt algorithm

combines an initial steepest descent phase with the Gauss-Newton algorithm to converge to the final solution
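
A hedged example of nonlinear regression using SciPy's curve_fit, which defaults to the Levenberg-Marquardt algorithm when no bounds are given; the exponential model and data here are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Model y = a * exp(b * x) is nonlinear in b, so there is no
# closed-form solution; the fit may only find a local SSE minimum
# depending on the initial guess p0.
def model(x, a, b):
    return a * np.exp(b * x)

x = np.linspace(0, 4, 20)
y = 2.0 * np.exp(0.5 * x) + 0.1 * np.random.default_rng(0).normal(size=x.size)

params, _ = curve_fit(model, x, y, p0=[1.0, 1.0])
print(params)   # should be close to [2.0, 0.5]
```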

57
New cards

nonlinear regressions can

perform best fits even when the model is nonlinear with respect to parameters

58
New cards

nonlinear regression cannot

guarantee a global minimum, may converge to local minima

59
New cards

optimization methods such as steepest descent or Newton’s method can make

optimization more efficient than brute force

60
New cards

Runge Kutta method

provides an estimate of the slope of y(x) over the interval from xi to xi+1

61
New cards

Euler’s method local truncation error

measures the error introduced in one step of the method, assuming the previous step was exact; Euler's method can be derived from the Taylor series

62
New cards

Euler’s method local truncation error

O(h²)

63
New cards

Euler’s method global truncation error

is the error at a fixed time, ti, after however many steps the method needs to take to reach that time from the initial time; it is O(h)

64
New cards

Euler’s method overview

approximate the solution using the slope at the current point

65
New cards

Euler’s method error depends on

step size
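
A minimal Euler's method sketch on the made-up ODE dy/dx = -2y, stepping forward with the slope at the current point; step size and interval are arbitrary.

```python
import numpy as np

# Euler's method: y_{i+1} = y_i + h * f(x_i, y_i).
# Local truncation error is O(h^2); global error is O(h).
def f(x, y):
    return -2.0 * y             # example ODE with exact solution y = e^(-2x)

h = 0.01
x, y = 0.0, 1.0
for _ in range(100):            # integrate from x = 0 to x = 1
    y = y + h * f(x, y)
    x = x + h
print(y, np.exp(-2.0))          # Euler estimate vs exact value
```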

66
New cards

Higher Order Runge Kutta

predict an initial slope k1 and then gradually refine/correct it with other slopes

67
New cards

Midpoint single step error

O(h³)

68
New cards

Midpoint cumulative error

O(h²)

69
New cards

4th Order Runge Kutta Method single step error

O(h^5)

70
New cards

4th Order Runge Kutta Method cumulative error

O(h^4)
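
A sketch of a single 4th-order Runge-Kutta step applied repeatedly to the same toy ODE; the four slope estimates k1..k4 are blended into one update.

```python
def rk4_step(f, x, y, h):
    """One 4th-order Runge-Kutta step: blend four slope estimates.
    Single-step error is O(h^5); cumulative error is O(h^4)."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def f(x, y):
    return -2.0 * y             # same example ODE as above

x, y, h = 0.0, 1.0, 0.1
for _ in range(10):             # integrate from x = 0 to x = 1
    y = rk4_step(f, x, y, h)
    x += h
print(y)   # very close to exp(-2) even with the larger step size
```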

71
New cards

Increasing the order of RK increases

the accuracy

72
New cards

Error for RK methods depends on

step size, h

73
New cards

Lorenz Equation - Butterfly Effect

a small change in one state of a deterministic nonlinear system can result in large differences in a later state
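
An illustrative sketch of the butterfly effect: integrating the Lorenz equations (classic parameter values assumed) from two nearly identical initial states with SciPy's solve_ivp and comparing the final states.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz system with the classic parameters sigma, rho, beta.
def lorenz(t, state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

t_span = (0.0, 20.0)
sol1 = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0], dense_output=True)
sol2 = solve_ivp(lorenz, t_span, [1.0, 1.0, 1.0 + 1e-8], dense_output=True)

# The two trajectories started 1e-8 apart but end up far apart.
print(np.abs(sol1.sol(20.0) - sol2.sol(20.0)))
```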