How do you know if a square matrix is diagonalizable
guaranteed if it has n linearly independent eigenvectors. this is the case if at least one of the following holds:
it has n distinct eigenvalues OR
each of its eigenvalues' algebraic multiplicity equals its geometric multiplicity OR
it is symmetric OR
some combination of these is met
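A quick numpy sketch of the check (the matrix A below is a made-up example with a repeated eigenvalue, and the rank test is subject to floating-point tolerance):

```python
import numpy as np

# Made-up example: a shear-like matrix with eigenvalue 2 repeated
# (algebraic multiplicity 2, geometric multiplicity 1).
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
n = A.shape[0]

# A is diagonalizable iff it has n linearly independent eigenvectors,
# i.e. the matrix of eigenvectors has full rank.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.linalg.matrix_rank(eigenvectors) == n)   # False for this defective A
```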
geometric multiplicity
the dimension of the eigenspace corresponding to an eigenvalue, representing the number of linearly independent eigenvectors associated with that eigenvalue.
algebraic multiplicity
the number of times an eigenvalue appears as a root of the characteristic polynomial of a matrix.
geometric and algebraic multiplicty relationship
algebraic multiplicity must be greater than or equal to geometric multiplicity. if they are equal for a given eigenvalue, that eigenvalue contributes a full set of independent eigenvectors. If they are equal for every eigenvalue, the matrix is diagonalizable
det of a rotation
ALWAYS 1
det of a reflection
ALWAYS -1
definition and determinant of a shear
vertical shear by a factor of k: (x,y) goes to (x, y+kx). determinant ALWAYS 1
Projection determinant
1 if it is the identity projection (projects onto the entire subspace)
0 if not (if it collapses its input onto something with a smaller dimension)
reflection onto L eigenvectors and eigenvalues
values: 1 and -1
vectors: parallel to L for 1, perpendicular to L for -1
projection onto L eigenvectors and eigenvalues
values: 0 and 1
vectors: perpendicular to L for 0, parallel to L for 1
Rotation by angle x eigenvectors and eigenvalues
IF 0<x<180 (NOT inclusive): NO real eigenvalues or eigenvectors
IF x=180: value -1; every vector is an eigenvector (every vector is sent to its negative)
IF x=0: value 1; every vector is an eigenvector (everything just stays the same)
shear eigenvector and eigenvalue
value: 1 only
vector: the direction in which nothing moves (for a horizontal shear, eigenvector (1,0))
symmetric matrix
equal to its transpose. always diagonalizable
injective
distinct inputs map to distinct outputs. number of columns equals rank
surjective
range is the entire codomain. number of rows equals rank
How can you get to [T]b (transformation matrix T in basis B) from [T]e
If e is not the standard basis:
[T]b = S(e to b) * [T]e * S(b to e)
If e is the standard basis:
[T]b = B^(-1) [T]e B (where B's columns are the basis vectors of b)
Note:
transformation from standard to B is B inverse.
Transformation from B to standard is B
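A minimal numpy sketch of the standard-basis case (T_e and B below are made-up examples; B's columns are the basis vectors of b):

```python
import numpy as np

# Hypothetical example: [T]e in the standard basis, B's columns = basis b.
T_e = np.array([[1.0, 2.0],
                [0.0, 3.0]])
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# B converts b-coordinates to standard; B^-1 converts standard to b.
T_b = np.linalg.inv(B) @ T_e @ B
print(T_b)
```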
what does it mean to have a transformation matrix T in basis B?
Each column of [T]b tells us what linear combination of the vectors of b you need to reach each transformed basis vector. For example, the first column of [T]b holds the coefficients of b1, b2, b3, and b4 that you need to get to T(b1)
What is the change of basis matrix S from basis A to basis B? (2 ways to say it, say both)
S=B^(-1)A
To get S, we write each vector of A in the coordinates of the new basis, B.
What linear combination of the vectors of B do you need to get to each of the vectors in A?
is it invertible?
yes, always (its columns come from a basis, so they are linearly independent and the determinant is nonzero)
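A tiny sketch of S = B^(-1)A with made-up bases (the columns of A_basis and B_basis are the basis vectors):

```python
import numpy as np

# Hypothetical bases: columns of A_basis / B_basis are the basis vectors.
A_basis = np.array([[1.0, 0.0],
                    [1.0, 1.0]])
B_basis = np.array([[2.0, 0.0],
                    [0.0, 2.0]])

S = np.linalg.inv(B_basis) @ A_basis   # change of basis from A to B

# Sanity check: a1 has A-coordinates <1, 0>; S gives its B-coordinates.
print(S @ np.array([1.0, 0.0]))        # [0.5 0.5], i.e. a1 = 0.5*b1 + 0.5*b2
```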
What is the least squares solution to Ax=b (what does that mean)
least squares solution: when Ax=b has no exact solution, we find the x that gets Ax as close as possible to b
We have a vector v that is NOT in the range (column space) of a matrix A and we are trying to solve Ax=v. Since v isn’t in the column space, there is no solution. What do we do?
Least squares solution: if I am looking for a solution x to Ax=v but v isn't in the column space of A, then I need to find another vector u that IS in the column space of A but is as close to v as possible. To do this, I orthogonally project v onto the column space of A, and the resulting vector is my new target, u.
Then solve Ax=u. Working the projection formula through and isolating x gives x = (A^T A)^(-1) A^T v

Least squares, shorter version: we are looking for a solution to Ax=v, but v isn't in the column space.
find the orthogonal projection of v onto the column space of A. that gives you u. then solve Ax=u if you want x.
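A minimal numpy sketch of the normal-equations route (A and v below are made-up; v is deliberately outside col(A)):

```python
import numpy as np

# Hypothetical overdetermined system: v is not in the column space of A.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
v = np.array([1.0, 2.0, 0.0])

# Normal equations: x = (A^T A)^-1 A^T v
x = np.linalg.solve(A.T @ A, A.T @ v)
u = A @ x                      # u = the orthogonal projection of v onto col(A)
print(x, u)

# np.linalg.lstsq computes the same x (and is more numerically stable):
x2, *_ = np.linalg.lstsq(A, v, rcond=None)
print(np.allclose(x, x2))      # True
```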
that long formula with the transposes and inverses is the formula for projecting onto the column space of a matrix whose columns are not orthogonal. how do you project a vector onto the column space of a matrix with orthogonal columns?
project your vector onto each vector of the matrix and add them all up
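A short sketch of that shortcut (Q below is a made-up example with orthonormal columns):

```python
import numpy as np

# Q is assumed to have orthogonal (here, orthonormal) columns.
Q = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
v = np.array([3.0, 4.0, 5.0])

# Project v onto each column and add the projections up:
proj = sum(((v @ q) / (q @ q)) * q for q in Q.T)
print(proj)                               # [3. 4. 0.]
# With orthonormal columns this is just Q @ (Q.T @ v).
print(np.allclose(proj, Q @ (Q.T @ v)))   # True
```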
What is Q, what is R in QR decomposition
A=QR
Q - a matrix with orthonormal columns that span the column space of A (an orthonormal basis for col(A))
R - an upper triangular matrix (so that if you want to get back to A from Q you can)
You have an orthogonal matrix. What is its inverse equal to
its transpose
Steps for QR factorization
do Gram-Schmidt on the columns of A to get Q, an orthonormal basis with the same span
R = Q^T * A (R equals Q transpose times A; since Q's columns are orthonormal, Q^T Q = I, so Q^T undoes Q)
that's all
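A quick numpy sketch (the matrix A is a made-up example; np.linalg.qr does the Gram-Schmidt-equivalent work):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])   # made-up matrix with independent columns

Q, R = np.linalg.qr(A)

# R can also be recovered as Q^T A, since Q^T Q = I:
print(np.allclose(R, Q.T @ A))   # True
print(np.allclose(A, Q @ R))     # True
```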
How does QR help with least squares problems
Substitute A = QR into the normal equations. Since Q^T Q = I, they collapse to Rx = Q^T v, and because R is upper triangular (and invertible when A's columns are independent), x = R^(-1) Q^T v is easy to compute.
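A sketch of that shortcut, reusing the made-up A and v from the least squares sketch above:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
v = np.array([1.0, 2.0, 0.0])

Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ v)   # R x = Q^T v

# Same answer as the (A^T A)^-1 A^T v route:
x_normal = np.linalg.solve(A.T @ A, A.T @ v)
print(np.allclose(x, x_normal))   # True
```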
what does gram schmidt give you
an orthonormal basis that spans the same subspace
gram schmidt steps
let q be the new orthonormal basis and let a be the old, non-orthonormal basis
q1=a1
q2=a2 - projection of a2 onto q1
q3=a3 - projection of a3 onto q1 - projection of a3 onto q2
(then normalize each q to unit length if you want an orthonormal basis rather than just an orthogonal one)
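A minimal implementation sketch of these steps (the test matrix A is made up; assumes linearly independent columns):

```python
import numpy as np

def gram_schmidt(A):
    """Sketch of Gram-Schmidt: columns of A in, orthonormal columns out.
    Assumes the columns of A are linearly independent."""
    Q = []
    for a in A.T:                          # walk the columns of A
        q = a.astype(float)
        for prev in Q:                     # subtract the projection onto each earlier q
            q = q - (prev @ a) * prev      # prev is unit length, so this is the projection
        Q.append(q / np.linalg.norm(q))    # the normalization step (orthoNORMAL)
    return np.column_stack(Q)

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
Q = gram_schmidt(A)
print(np.allclose(Q.T @ Q, np.eye(2)))     # True: columns are orthonormal
```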
dimension of a subspace plus the dimension of its orthogonal complement?
n (this tracks because any basis for a subspace plus any basis for its orthogonal complement gives you a basis for Rn)
what is the case for any diagonalizable matrix if you do a specific change of basis?
any diagonalizable matrix becomes diagonal in some basis (namely, a basis of its eigenvectors)
How do you diagonalize a matrix
Write it as PDP^-1
find all its eigenvalues by setting det(A-lambda I)=0 and solving for all the lambdas
find all its eigenvectors by solving (A-lambda I)(x)=0 for each lambda
P is a matrix of eigenvectors. D is a diagonal matrix of eigenvalues in the same order
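A quick numpy sketch of these steps (A below is a made-up matrix with distinct eigenvalues 5 and 2):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])   # made-up diagonalizable matrix

# np.linalg.eig returns the lambdas and the eigenvector matrix P in one call:
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# Verify the factorization A = P D P^-1:
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True
```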
Requirements for V to be a subspace of S
V contains the zero vector
V is closed under vector addition and scalar multiplication
rank nullity for T: V to W
dim(im)+dim(ker)= dim V
OR
rank(T) + nullity(T) = dim V
det (CB)
det(C) * det(B)
det(C^-1)
1/detC
writing a vector as a linear combination of two others that are orthogonal
add up the projections of your vector onto each of the orthogonal vectors
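A tiny sketch (u, w, v below are made-up; u and w are orthogonal and v lies in their span):

```python
import numpy as np

# u and w are orthogonal (u . w == 0); v lies in their span in this example.
u = np.array([1.0, 1.0, 0.0])
w = np.array([1.0, -1.0, 0.0])
v = np.array([3.0, 1.0, 0.0])

# Add up the projections of v onto u and onto w:
combo = (v @ u) / (u @ u) * u + (v @ w) / (w @ w) * w
print(np.allclose(combo, v))   # True: v = 2u + 1w
```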
writing a vector as a linear combination of two others that are not orthogonal
Sometimes not possible.
If writing v as a combo of u and w:
make a matrix with columns [u w | v]
rref it
read the coefficients of u and w off the augmented column of the RREF; if any row looks like [0 0 | nonzero], no combination exists
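A sketch of the same check in numpy (u, w, v are made up; lstsq stands in for row reduction):

```python
import numpy as np

# Made-up example: is v a combination of u and w?
u = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 1.0, 1.0])
v = np.array([1.0, 4.0, 2.0])

M = np.column_stack([u, w])                    # the [u w] part of [u w | v]
coeffs, *_ = np.linalg.lstsq(M, v, rcond=None) # least-squares stand-in for RREF
print(coeffs)                                  # [1. 2.] -> v = 1*u + 2*w
print(np.allclose(M @ coeffs, v))              # True means a combination exists
```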
S=AB. What is S inverse?
B^-1 A^-1 (flip the order!)
what does det=0 tell us (3 things)
matrix is not invertible
it collapses its input into a lower dimension
the system Ax=b either has no solutions or infinitely many solutions
A subspace is defined by a given set of equations. How would you make a basis for that subspace?
put your equations into a matrix (one column per variable)
RREF the matrix
identify all your free variables and assign each one a parameter
write all your variables in terms of those parameters (write all your pivot variables in terms of the free variables)
make a new matrix where each column holds the coefficients of one parameter. That's your basis!
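A numpy sketch of the same idea (the equations below are made up; the SVD null-space trick replaces hand RREF):

```python
import numpy as np

# Made-up example: the subspace of R^3 cut out by x + y + z = 0 and y - z = 0.
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, -1.0]])

# The subspace is the kernel (null space) of A; one way to get it is via SVD:
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
basis = Vt[rank:].T          # rows of Vt beyond the rank span the null space
print(np.allclose(A @ basis, 0.0))   # columns of `basis` satisfy the equations
```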
how do you find the orthogonal complement of a subspace W
Find a basis for W, B
write a new matrix A. Each row of A should be a column of B
solve Ax=0 for x (find the kernel of A)
the solutions form a basis for W perp
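A sketch of these steps (the basis B is made up; reuses the SVD null-space trick from the previous sketch):

```python
import numpy as np

# Made-up example: columns of B form a basis for a subspace W of R^3.
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])

A = B.T                      # rows of A are the basis vectors of W
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
W_perp = Vt[rank:].T         # basis for the orthogonal complement of W
print(np.allclose(B.T @ W_perp, 0.0))   # W-perp is orthogonal to all of W's basis
```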
Change of basis matrix from C to B
B[x]b=C[x]c
[x]b=B^(-1)C [x]c
Relationship between kernel and image and injectivity/surjectivity
if dim ker = 0, then it is injective
if image=codomain, it is surjective
how do you find [x]b (vector x in basis B)?
what linear combination of the vectors of basis b gets you to x?
sometimes you can figure it out by inspection (you can see that 2b1+3b2 gives you x, so [x]b is <2,3>)
if B is orthogonal, each coordinate of [x]b is just the projection coefficient (x · bi)/(bi · bi)
OTHERWISE: set up a linear system where each of b1, b2, b3, etc are columns of an augmented matrix and x is the augmented column. you are trying to solve for b1, b2, b3 coefficients that will get you to x.
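A minimal sketch of the general case (the basis matrix B and vector x are made-up examples):

```python
import numpy as np

# Hypothetical basis: columns of B are b1 and b2.
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x = np.array([3.0, 2.0])

# Solve B c = x: c holds the coefficients of b1, b2 that reach x.
coords = np.linalg.solve(B, x)
print(coords)   # [1. 2.] means x = 1*b1 + 2*b2, so [x]_b = <1, 2>
```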
angle between 2 vectors formula
cos(theta) = (u · v) / (|u| |v|), so theta = arccos( (u · v) / (|u| |v|) )
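A tiny sketch of the formula (u and v are made-up vectors 45 degrees apart):

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])

cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))   # clip guards rounding error
print(np.degrees(theta))   # 45.0
```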