UMD MATH240 Exam #3

40 Terms

1

Change of Coordinates Matrix

A matrix that transforms coordinate vectors of one basis to coordinate vectors of another basis.

2

Change of Coordinates Matrix from B to C

Given bases B and C, row reduce the augmented matrix [C1 ... Cn | B1 ... Bn] to [In | P(C←B)]
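
A minimal numpy sketch of this procedure, with two made-up bases of R2 stored as matrix columns (the names b_mat and c_mat are illustrative). Row reducing [C | B] to [I | P] is the same as solving c_mat @ P = b_mat, which np.linalg.solve does directly:

```python
import numpy as np

# Hypothetical bases of R^2, one basis vector per column.
b_mat = np.array([[1.0, 2.0],
                  [1.0, 3.0]])   # B = {b1, b2}
c_mat = np.array([[1.0, 0.0],
                  [1.0, 1.0]])   # C = {c1, c2}

# Row reducing [c1 c2 | b1 b2] to [I | P] is equivalent to
# solving c_mat @ P = b_mat for P = P(C<-B).
p_cb = np.linalg.solve(c_mat, b_mat)

# Check: P(C<-B) maps B-coordinates to C-coordinates.
x_b = np.array([2.0, -1.0])      # [x]_B
x = b_mat @ x_b                  # x itself
x_c = p_cb @ x_b                 # [x]_C
assert np.allclose(c_mat @ x_c, x)
```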

3

Properties of P(C←B)

P(C←B) = P(B←C)⁻¹

P(ε←B) = P(B) = [B1 ... Bn]

P(D←B) = P(D←C)*P(C←B)

4

Eigenvector

Non-zero vector x such that Ax = λx for some scalar λ.

5

Eigenvalue

A scalar λ such that Ax = λx for some nonzero vector x; the factor by which an eigenvector is scaled.

6

Eigenspace

Nul(A-λ*In)

Equivalently: the set of all solutions of (A-λ*In)x = 0, i.e. the eigenvectors for λ together with the zero vector

7

Linear Independence of Eigenvectors

Eigenvectors with distinct eigenvalues are linearly independent

8

Eigenvector Limit

An n × n matrix has at most n distinct eigenvalues, and hence at most n linearly independent eigenvectors.

9

Eigenvalue Invertibility

A is not invertible if and only if it has λ = 0 as an eigenvalue.

10

Characteristic Equation

det(A-λ*In) = 0

Solve the above equation to determine the values of λ for A.
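
A short numpy sketch of this card with a made-up 2 × 2 matrix: np.poly returns the characteristic polynomial's coefficients (up to sign), and its roots are the eigenvalues.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly gives the coefficients of det(A - lambda*I) (up to sign),
# so its roots are the eigenvalues of A.
coeffs = np.poly(A)        # here: lambda^2 - 4*lambda + 3
lams = np.roots(coeffs)    # 3 and 1

# Same answer from the dedicated eigenvalue routine.
assert np.allclose(np.sort(lams), np.sort(np.linalg.eigvals(A)))
```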

11

Eigenvalues of a Triangular Matrix

The eigenvalues of a triangular matrix A are its diagonal entries.

12

Similarity

n × n matrices A and B are similar if there exists an invertible n × n matrix P such that A = PBP⁻¹

13

Characteristics of Similar Matrices

Similar matrices have the same characteristic polynomial, and therefore the same eigenvalues; their corresponding eigenspaces also have the same dimension. Their eigenvectors, however, are generally different.

14

Diagonalization Theorem

An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.

15

Diagonalization of A

Find an invertible matrix P and a diagonal matrix D such that A = PDP⁻¹.

P = [v1 ... vn] (n linearly independent eigenvectors of A)

D = diagonal matrix with the eigenvalues of A on the diagonal, in the order matching the columns of P.

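A minimal numpy sketch of diagonalization, with a made-up matrix that has distinct eigenvalues; np.linalg.eig supplies the pieces of D and P:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose columns
# are corresponding eigenvectors: exactly the D and P of A = P D P^-1.
lams, P = np.linalg.eig(A)
D = np.diag(lams)

assert np.allclose(A, P @ D @ np.linalg.inv(P))
```
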
16

Diagonalizability

An n × n matrix with n distinct eigenvalues is diagonalizable.

17

Diagonalizability with Higher Multiplicity

An n × n matrix with r distinct eigenvalues is diagonalizable if and only if the dimension of each eigenspace equals the multiplicity of its eigenvalue.

18

Coordinate Matrix of a Transformation

For a basis B, a linear transformation T can be represented by the matrix [T]B (the "B-matrix of T").

19

Transformation Matrix Similarity

[T]B and [T]C are similar for bases B and C.

20

Change of Coordinates of a Transformation

For bases B and C

[T]C = P(C←B)*[T]B*P(B←C)
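
A small numpy sketch of this formula, with a made-up transformation and bases (b_mat, c_mat, and T_B are illustrative names):

```python
import numpy as np

b_mat = np.array([[1.0, 0.0], [0.0, 1.0]])   # B = standard basis
c_mat = np.array([[1.0, 1.0], [0.0, 1.0]])   # C
T_B = np.array([[2.0, 0.0], [0.0, 3.0]])     # [T]_B

p_cb = np.linalg.solve(c_mat, b_mat)   # P(C<-B)
p_bc = np.linalg.inv(p_cb)             # P(B<-C)

T_C = p_cb @ T_B @ p_bc                # [T]_C = P(C<-B) [T]_B P(B<-C)

# [T]_B and [T]_C are similar, so they share eigenvalues.
assert np.allclose(np.sort(np.linalg.eigvals(T_B)),
                   np.sort(np.linalg.eigvals(T_C)))
```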

21

Complex Eigenvectors of A

If a real matrix has a complex eigenvector, then the conjugate of that eigenvector is also an eigenvector. The conjugate of a vector is the vector whose entries are the complex conjugates of the original entries.

22

Complex Eigenvalues of A

If a real matrix has a complex eigenvalue λ = a + bi, then its conjugate λ̄ = a - bi is also an eigenvalue.

23

Dot Product of Vectors

For vectors U and V in Rn, dot(U, V) is defined as transpose(U)*V.

dot(U, V) = u1* v1 + ... + un*vn

24

Properties of Dot Products

dot(U, V) = dot(V, U)

dot((U + V), W) = dot(U, W) + dot(V, W)

dot(cU, V) = c*dot(U, V)

dot(U, U) ≥ 0

dot(U, U) = 0 if and only if U = 0

25

Norm (or Length) of Vectors

‖V‖ = √(dot(V, V))
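
A quick numpy check of the last three cards (the dot product, a couple of its properties, and the norm), using made-up vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([3.0, 0.0, 4.0])

# dot(u, v) = transpose(u) @ v = u1*v1 + ... + un*vn
assert np.isclose(np.dot(u, v), 1*3 + 2*0 + 2*4)   # 11.0

# symmetry and positivity
assert np.isclose(np.dot(u, v), np.dot(v, u))
assert np.dot(u, u) >= 0

# norm: ||v|| = sqrt(dot(v, v))
assert np.isclose(np.linalg.norm(v), np.sqrt(np.dot(v, v)))   # 5.0
```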

26

Unit Vector

A vector V such that ‖V‖ = 1

27

Orthogonality (OG)

dot(U, V) = 0

Geometrically, the two vectors form a right angle.

28

Orthogonal Complement

The set of all vectors orthogonal to every vector in W, denoted W⊥.

29

Basis of Orthogonal Complement

For an m × n matrix A:

(RowA)⊥ = NulA

(ColA)⊥ = Nul(transpose(A))
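
A sketch of both identities using scipy.linalg.null_space, which returns an orthonormal basis of the null space; A is a made-up 3 × 3 matrix of rank 2, so both complements are nontrivial:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 1.0, 1.0]])   # rank 2

# (Row A)-perp = Nul A: every null-space basis vector is
# orthogonal to every row of A.
N = null_space(A)
assert np.allclose(A @ N, 0)

# (Col A)-perp = Nul(A^T), computed the same way from the transpose.
M = null_space(A.T)
assert np.allclose(A.T @ M, 0)
```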

30

Orthogonal Basis

A basis such that all vectors of the basis are orthogonal to each other.

31

Orthogonal Matrix

A matrix whose columns form an orthonormal basis for ColA.

Equivalently: the columns of A are orthonormal to each other.

32

Orthonormality (ON)

Unit vectors that are orthogonal to each other. The dot product of two distinct orthonormal vectors is 0, and the dot product of an orthonormal vector with itself is 1.

33

Properties of an Orthogonal Matrix

For an orthogonal matrix U:

transpose(U)*U = In

inverse(U) = transpose(U)
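
A quick numpy check of these properties, using a rotation matrix as a standard example of an orthogonal matrix:

```python
import numpy as np

theta = np.pi / 3
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(U.T @ U, np.eye(2))      # transpose(U)*U = I
assert np.allclose(np.linalg.inv(U), U.T)   # inverse(U) = transpose(U)
```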

34

Orthogonal Projection

Given two nonzero vectors u and y in R^n, if we write

y = ŷ + z, where

ŷ is a scalar multiple of u and

z is orthogonal to u,

then ŷ is the orthogonal projection of y onto u

and z is called the component of y orthogonal to u.

ŷ = proj_L y = ((y · u)/(u · u))*u, where L = Span{u}
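
A minimal numpy sketch of this decomposition with made-up vectors y and u:

```python
import numpy as np

y = np.array([7.0, 6.0])
u = np.array([4.0, 2.0])

y_hat = (np.dot(y, u) / np.dot(u, u)) * u   # projection of y onto u
z = y - y_hat                               # component orthogonal to u

assert np.isclose(np.dot(z, u), 0.0)        # z is orthogonal to u
assert np.allclose(y, y_hat + z)            # y = y_hat + z
```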

35

Orthogonal Decomposition

For an orthogonal basis {b1, ..., bp} of a subspace W of Rn:

The projection of y onto W is (dot(y, b1)/dot(b1, b1))*b1 + ... + (dot(y, bp)/dot(bp, bp))*bp
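
A small numpy sketch of the formula, assuming a made-up orthogonal basis of the xy-plane in R3 (so the correct projection is easy to see):

```python
import numpy as np

# Orthogonal basis of the xy-plane in R^3.
b1 = np.array([1.0, 1.0, 0.0])
b2 = np.array([1.0, -1.0, 0.0])
assert np.isclose(np.dot(b1, b2), 0.0)

y = np.array([2.0, 3.0, 4.0])

# proj_W(y) = sum over the basis of (dot(y, bi)/dot(bi, bi)) * bi
y_hat = sum((np.dot(y, b) / np.dot(b, b)) * b for b in (b1, b2))

# W is the xy-plane, so projecting drops the z-component.
assert np.allclose(y_hat, [2.0, 3.0, 0.0])
```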

36

Best Approximation Theorem

The projection ŷ of y onto W is the closest point in W to y; that is, ‖y - ŷ‖ is the minimum distance from y to W.

37

Standard Matrix of a Projection

For a matrix U whose columns form an orthonormal basis of W:

U*transpose(U) is the standard matrix of ProjW, i.e. ProjW y = U*transpose(U)*y
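
A quick numpy check of this identity, with U holding an orthonormal basis of the xy-plane in R3 as its columns:

```python
import numpy as np

U = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])   # orthonormal basis of the xy-plane

proj_W = U @ U.T             # standard matrix of Proj_W
y = np.array([2.0, 3.0, 4.0])
assert np.allclose(proj_W @ y, [2.0, 3.0, 0.0])
```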

38

Properties of Projections

ProjW y = y ↔ y is in W

ProjW y = 0 ↔ y is in W⊥

ProjW has eigenvalues λ = 0, 1, where the eigenspace for λ = 1 is W and the eigenspace for λ = 0 is W⊥

39

Gram-Schmidt Process

Process for finding an orthogonal basis of a subspace W.

Given a basis {x1, ..., xn} for W:

Let v1 = x1

Let v2 = x2 - (dot(x2, v1)/dot(v1, v1))*v1

...

Let vp = xp - (dot(xp, v1)/dot(v1, v1))*v1 - ... - (dot(xp, v_(p-1))/dot(v_(p-1), v_(p-1)))*v_(p-1), for p = 2, ..., n

V = {v1, ..., vn} is an OG basis for W. An ON basis can be found by dividing each vector in V by its norm.
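
A minimal Python sketch of the process (gram_schmidt is an illustrative helper, not library code), checked for pairwise orthogonality:

```python
import numpy as np

def gram_schmidt(xs):
    """Orthogonal basis from a basis, following the formulas above."""
    vs = []
    for x in xs:
        # Subtract the projection of x onto each earlier v.
        v = x - sum((np.dot(x, w) / np.dot(w, w)) * w for w in vs)
        vs.append(v)
    return vs

xs = [np.array([1.0, 1.0, 1.0]),
      np.array([0.0, 1.0, 1.0]),
      np.array([0.0, 0.0, 1.0])]

vs = gram_schmidt(xs)
for i in range(len(vs)):
    for j in range(i + 1, len(vs)):
        assert np.isclose(np.dot(vs[i], vs[j]), 0.0)

# Normalize each vector for an ON basis.
ons = [v / np.linalg.norm(v) for v in vs]
```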

40

QR Factorization

A = QR where:

Q is an m × n matrix whose columns form an ON basis for ColA (i.e. Q has orthonormal columns)

R is an n × n upper triangular matrix with positive diagonal entries

Q is found by performing the Gram-Schmidt process on the columns of A.

R is found using R = transpose(Q) * A
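
A sketch using np.linalg.qr on a made-up full-rank matrix; numpy does not promise positive diagonal entries in R, so the signs are flipped afterward to match this card's convention:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

Q, R = np.linalg.qr(A)   # reduced QR: Q is m x n, R is n x n upper triangular

# Flip signs column-by-column so R has positive diagonal entries;
# this leaves the product Q @ R unchanged.
signs = np.sign(np.diag(R))
Q, R = Q * signs, signs[:, None] * R

assert np.allclose(Q.T @ Q, np.eye(2))   # ON columns
assert np.allclose(A, Q @ R)             # A = QR
assert np.all(np.diag(R) > 0)
```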