Change of Coordinates Matrix
A matrix that transforms coordinate vectors of one basis to coordinate vectors of another basis.
Change of Coordinates Matrix from B to C
Given bases B = {b1, ..., bn} and C = {c1, ..., cn}, form the augmented matrix [c1 ... cn | b1 ... bn] and take its RREF to obtain [In | P(C←B)].
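A minimal numpy sketch of this recipe (the bases are made up; row-reducing [C | B] amounts to solving C·P = B):

```python
import numpy as np

# Made-up bases for R^2; columns are the basis vectors.
B = np.array([[1.0, 2.0],
              [0.0, 1.0]])          # B = [b1 b2]
C = np.array([[1.0, 1.0],
              [1.0, -1.0]])         # C = [c1 c2]

# Row reducing [C | B] to [I | P] is equivalent to solving C @ P = B.
P_C_from_B = np.linalg.solve(C, B)  # P(C<-B)

# Check: P(C<-B) converts B-coordinates into C-coordinates.
x_B = np.array([3.0, -2.0])         # B-coordinates of some vector x
x = B @ x_B                         # the vector itself
assert np.allclose(C @ (P_C_from_B @ x_B), x)

# Property check: P(C<-B) = P(B<-C)^(-1).
P_B_from_C = np.linalg.solve(B, C)
assert np.allclose(P_C_from_B, np.linalg.inv(P_B_from_C))
```

The last assertion doubles as a check of the inverse property on the next card.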
Properties of P(C←B)
P(C←B) = P(B←C)⁻¹
P(ε←B) = P(B) = [b1 ... bn], where ε is the standard basis
P(D←B) = P(D←C)*P(C←B)
Eigenvector
Non-zero vector x such that Ax = λx for some scalar λ.
Eigenvalue
The scalar λ by which an eigenvector is scaled.
Eigenspace
Nul(A-λ*In)
Equivalently: the set of nontrivial solutions of (A-λ*In)x = 0
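A quick numpy check of these definitions on a made-up matrix (np.linalg.eig returns eigenvalues and eigenvectors; each eigenvector lies in Nul(A - λ*In)):

```python
import numpy as np

# Made-up symmetric matrix so the eigenvalues come out real.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])

lams, V = np.linalg.eig(A)   # eigenvalues, eigenvectors as columns of V

for lam, x in zip(lams, V.T):
    # Eigenvector definition: Ax = lam*x with x nonzero.
    assert np.allclose(A @ x, lam * x)
    # Equivalently, x is a nontrivial solution of (A - lam*I)x = 0,
    # i.e. x lies in the eigenspace Nul(A - lam*I).
    assert np.allclose((A - lam * np.eye(2)) @ x, 0.0)
```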
Linear Independence of Eigenvectors
Eigenvectors with distinct eigenvalues are linearly independent
Eigenvector Limit
An n × n matrix has at most n distinct eigenvalues, and at most n linearly independent eigenvectors.
Eigenvalue Invertibility
A is invertible if and only if λ = 0 is not an eigenvalue of A.
Characteristic Equation
det(A-λ*In) = 0
Solve this equation for λ to find the eigenvalues of A.
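A numpy sketch of solving the characteristic equation on a made-up matrix (np.poly(A) returns the characteristic polynomial's coefficients):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Coefficients of the characteristic polynomial, highest power first:
# here lam^2 - 4*lam + 3.
coeffs = np.poly(A)

# The roots of the characteristic equation are the eigenvalues.
lams = np.roots(coeffs)      # [3., 1.]
assert np.allclose(np.sort(lams), np.sort(np.linalg.eigvals(A)))
```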
Eigenvalues of a Triangular Matrix
The eigenvalues of a triangular matrix are its diagonal entries.
Similarity
n × n matrices A and B are similar if there exists an invertible n × n matrix P such that A = PBP⁻¹
Characteristics of Similar Matrices
Similar matrices have the same characteristic polynomial, so they share eigenvalues, and corresponding eigenspaces have the same dimension. Their eigenvectors, however, generally differ.
Diagonalization Theorem
An n × n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.
Diagonalization of A
Find an invertible matrix P and a diagonal matrix D similar to A.
P = [v1 ... vn] (eigenvectors of A)
D = eigenvalues of A on the diagonal. See image.
![Diagonalization of A](https://knowt-user-attachments.s3.amazonaws.com/f526df5b-a00a-4197-9e50-029078af276b.jpg)
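A minimal numpy sketch of this recipe (made-up matrix; np.linalg.eig supplies the eigenvectors for P and the eigenvalues for D):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # made-up matrix with eigenvalues 5 and 2

lams, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.diag(lams)            # eigenvalues of A on the diagonal

# Diagonalization: A = P D P^(-1); P is invertible because its
# columns (the eigenvectors) are linearly independent.
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```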
Diagonalizability
An n × n matrix with n distinct eigenvalues is diagonalizable.
Diagonalizability with Higher Multiplicity
An n × n matrix with r distinct eigenvalues is diagonalizable if and only if the dimension of each eigenspace equals the multiplicity of its eigenvalue (equivalently, the eigenspace dimensions sum to n).
Coordinate Matrix of a Transformation
For a basis B, the transformation T is represented by the matrix [T]B ("the B-matrix of T"), whose columns are [T(b1)]B, ..., [T(bn)]B.
Transformation Matrix Similarity
[T]B and [T]C are similar for bases B and C.
Change of Coordinates of a Transformation
For bases B and C:
[T]C = P(C←B)*[T]B*P(B←C)
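A numpy check of this change-of-basis formula on made-up data ([T]B and the bases are invented; similar matrices must share eigenvalues):

```python
import numpy as np

# Hypothetical B-matrix of T and bases B, C (columns are basis vectors).
T_B = np.array([[1.0, 2.0],
                [0.0, 3.0]])
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])
C = np.array([[2.0, 1.0],
              [0.0, 1.0]])

P_C_from_B = np.linalg.solve(C, B)      # P(C<-B)
P_B_from_C = np.linalg.inv(P_C_from_B)  # P(B<-C)

# [T]C = P(C<-B) * [T]B * P(B<-C)
T_C = P_C_from_B @ T_B @ P_B_from_C

# [T]B and [T]C are similar, so they share eigenvalues.
assert np.allclose(np.sort(np.linalg.eigvals(T_B)),
                   np.sort(np.linalg.eigvals(T_C)))
```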
Complex Eigenvectors of A
If a real matrix has a complex eigenvector, then the conjugate of that eigenvector is also an eigenvector. The conjugate of a vector is the vector whose entries are the conjugates of the original entries.
Complex Eigenvalues of A
If a real matrix has a complex eigenvalue λ = a + bi, then its conjugate λ̄ = a - bi is also an eigenvalue.
Dot Product of Vectors
For vectors U and V in Rn, dot(U, V) is defined as transpose(U)*V.
dot(U, V) = u1* v1 + ... + un*vn
Properties of Dot Products
dot(U, V) = dot(V, U)
dot((U + V), W) = dot(U, W) + dot(V, W)
dot(cU, V) = c*dot(U, V)
dot(U, U) ≥ 0
dot(U, U) = 0 if and only if U = 0
Norm (or Length) of Vectors
‖V‖ = √(dot(V, V))
Unit Vector
A vector V such that ‖V‖ = 1
Orthogonality (OG)
dot(U, V) = 0
Geometrically, two orthogonal vectors form a right angle.
Orthogonal Complement
The set of all vectors orthogonal to every vector in W, denoted W⊥.
Basis of Orthogonal Complement
For an m × n matrix A:
(RowA)⊥ = NulA
(ColA)⊥ = Nul(transpose(A))
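A numpy sketch of both identities on a made-up rank-1 matrix; the null_space helper (built on the SVD) is an assumption here, not a library routine:

```python
import numpy as np

def null_space(M, tol=1e-10):
    """Orthonormal basis of Nul(M) (as columns), via the SVD."""
    _, s, Vt = np.linalg.svd(M)
    rank = np.sum(s > tol)
    return Vt[rank:].T

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])   # made-up 3 x 2 matrix of rank 1

# (RowA)⊥ = NulA: every row of A is orthogonal to Nul A.
N = null_space(A)
assert np.allclose(A @ N, 0.0)

# (ColA)⊥ = Nul(transpose(A)): every column of A is orthogonal to it.
N_T = null_space(A.T)
assert np.allclose(A.T @ N_T, 0.0)
```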
Orthogonal Basis
A basis such that all vectors of the basis are orthogonal to each other.
Orthogonal Matrix
A matrix whose columns form an orthonormal basis for ColA.
Equivalently: the columns of A are orthonormal.
Orthonormality (ON)
Unit vectors that are orthogonal to each other: for orthonormal vectors U and V, dot(U, V) = 0 when U ≠ V, and dot(U, U) = 1.
Properties of an Orthogonal Matrix
For Orthogonal Matrix U:
transpose(U)*U = In
inverse(U) = transpose(U)
Orthogonal Projection
Given two nonzero vectors u and y in R^n, if we write
y = ŷ + z, where
ŷ is a scalar multiple of u and
z is orthogonal to u,
then ŷ is the orthogonal projection of y onto u
and z is called the component of y orthogonal to u.
ŷ = proj_L y = ((y · u)/(u · u))u, where L = Span{u}
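A minimal numpy sketch of this decomposition (the vectors are a made-up example):

```python
import numpy as np

y = np.array([7.0, 6.0])
u = np.array([4.0, 2.0])

# Orthogonal projection of y onto u (i.e. onto L = Span{u}).
y_hat = (np.dot(y, u) / np.dot(u, u)) * u   # [8., 4.]
z = y - y_hat                               # component of y orthogonal to u

assert np.isclose(np.dot(z, u), 0.0)        # z ⊥ u
assert np.allclose(y, y_hat + z)            # y = ŷ + z
```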
Orthogonal Decomposition
For an orthogonal basis {b1, ... bn} of subspace W of Rn:
The projection of y onto W is (dot(y, b1)/dot(b1, b1))*b1 + ... + (dot(y, bn)/dot(bn, bn))*bn
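A numpy sketch of this sum-of-projections formula (the orthogonal basis for the plane W is made up):

```python
import numpy as np

# Made-up orthogonal basis {b1, b2} for a plane W in R^3.
b1 = np.array([1.0, 1.0, 0.0])
b2 = np.array([1.0, -1.0, 1.0])
assert np.isclose(np.dot(b1, b2), 0.0)      # b1 ⊥ b2

y = np.array([2.0, 3.0, 5.0])

# Projection of y onto W: sum of projections onto each basis vector.
y_hat = sum((np.dot(y, b) / np.dot(b, b)) * b for b in (b1, b2))

# The leftover component is orthogonal to all of W.
z = y - y_hat
assert np.allclose([np.dot(z, b1), np.dot(z, b2)], 0.0)
```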
Best Approximation Theorem
The projection ŷ of y onto W is the closest point in W to y. Thus ‖y − ŷ‖ is the minimum distance from y to W.
Standard Matrix of a Projection
For a matrix U whose columns form an orthonormal basis of W:
ProjW = U*transpose(U)
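A quick numpy check that U*transpose(U) reproduces the orthogonal-decomposition formula (the orthonormal basis is made up):

```python
import numpy as np

# Made-up orthonormal basis of a plane W in R^3, as columns of U.
u1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
u2 = np.array([0.0, 0.0, 1.0])
U = np.column_stack([u1, u2])

proj_W = U @ U.T                   # standard matrix of ProjW

y = np.array([3.0, 1.0, 4.0])

# Agrees with projecting onto each basis vector and summing.
assert np.allclose(proj_W @ y, np.dot(y, u1) * u1 + np.dot(y, u2) * u2)
```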
Properties of Projections
ProjW y = y ↔ y is in W
ProjW y = 0 ↔ y is in W⊥
ProjW has eigenvalues λ= 0, 1 where eigenspace λ=1 is W and λ=0 is W⊥
Gram-Schmidt Process
Process for finding an orthogonal basis of a subspace W.
Given a basis {x1, ..., xn} for W:
Let v1 = x1
Let v2 = x2 - (dot(x2, v1)/dot(v1, v1))*v1
...
Let vn = xn - (dot(xn, v1)/dot(v1, v1))*v1 - ... - (dot(xn, v(n-1))/dot(v(n-1), v(n-1)))*v(n-1)
V = {v1, ..., vn} is an OG basis for W. An ON basis can be found by dividing each vector in V by its norm.
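A minimal Python implementation of the process above (classical Gram-Schmidt; the input basis is a made-up example):

```python
import numpy as np

def gram_schmidt(X):
    """Orthogonal basis (columns) from the basis in the columns of X."""
    V = []
    for x in X.T:
        v = x.copy()
        for u in V:
            v -= (np.dot(x, u) / np.dot(u, u)) * u   # subtract projections
        V.append(v)
    return np.column_stack(V)

# Made-up basis {x1, x2} for a subspace W of R^3 (columns).
X = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

V = gram_schmidt(X)
assert np.isclose(np.dot(V[:, 0], V[:, 1]), 0.0)     # OG basis

# Divide each vector by its norm to get an ON basis.
Q = V / np.linalg.norm(V, axis=0)
```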
QR Factorization
A = QR where:
Q is an m × n matrix with columns that form an ON basis for ColA. (i.e. Q is an orthogonal matrix)
R is an n × n upper triangular matrix with positive diagonal entries
Q is found by performing the Gram-Schmidt process on the columns of A.
R is found using R = transpose(Q) * A
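numpy provides QR directly; a quick sketch on a made-up matrix checking each property (note that numpy does not guarantee positive diagonal entries in R, so signs may be flipped relative to the Gram-Schmidt convention):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])   # made-up matrix with independent columns

Q, R = np.linalg.qr(A)       # "reduced" QR: Q is m x n, R is n x n

assert np.allclose(A, Q @ R)                   # A = QR
assert np.allclose(Q.T @ Q, np.eye(2))         # columns of Q are ON
assert np.allclose(R, np.triu(R))              # R is upper triangular
assert np.allclose(R, Q.T @ A)                 # R = transpose(Q) * A
```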