The essence of linear algebra.
What is a vector?
CS: An ordered list of numbers.
Math: arrows in space that happen to have a nice numerical representation.
What are the 2 fundamental vector operations, and how do they work?
1 vector addition
Adding two vectors produces a new vector. Geometrically, it is the same as starting at the origin, walking along the first vector, and then walking along the second vector from the tip of the first; the sum is the vector from the origin to where you end up.
2 scalar multiplication
squishes, stretches, or flips (reverses the direction of) a vector. (See the sketch below for both operations.)
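A minimal NumPy sketch of both operations; the vectors and scalars here are arbitrary examples:

```python
import numpy as np

v = np.array([1, 2])
w = np.array([3, -1])

# Vector addition: add component-wise; geometrically, tip-to-tail.
print(v + w)       # [4 1]

# Scalar multiplication: stretches (|s| > 1), squishes (|s| < 1), or flips (s < 0).
print(2.0 * v)     # [2. 4.]
print(-0.5 * w)    # [-1.5  0.5]
```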
What is the process of stretching, squishing or reversing the direction of a vector called?
Scaling
What is a scalar?
A number that scales a vector.
Describe a linear combination
the process of scaling and adding vectors.
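A minimal sketch of a linear combination a·v + b·w in NumPy; the scalars and vectors are arbitrary examples:

```python
import numpy as np

v = np.array([1, 0])
w = np.array([1, 1])
a, b = 3, -2

# Scale each vector by its scalar, then add the results.
print(a * v + b * w)   # [ 1 -2]
```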
span
the set of all possible vectors you can reach with a linear combination of a given set of vectors.
When is a vector linearly independent?
If it adds another dimension to the span.
When is a vector linearly dependent?
If a vector does not add a new dimension to the span, meaning it can be expressed as a linear combination of already included vectors.
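One way to check this numerically is to compare the rank of the matrix whose columns are the vectors with the number of vectors; the vectors below are arbitrary examples:

```python
import numpy as np

v = np.array([1, 2])
w = np.array([2, 4])   # w = 2 * v, so it adds nothing to the span
u = np.array([0, 1])   # u is not a multiple of v

# Rank < number of columns  ->  linearly dependent.
print(np.linalg.matrix_rank(np.column_stack([v, w])))  # 1 -> dependent
print(np.linalg.matrix_rank(np.column_stack([v, u])))  # 2 -> independent
```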
What are linear transformations?
a way to move around space such that gridlines remain parallel and evenly spaced, and the origin remains fixed. They are like functions in the sense that for each input they spit out an output.
What is a matrix?
A grid of numbers that represents a linear transformation: each column records where one of the basis vectors lands.
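A small NumPy sketch of that reading, using an arbitrary 2×2 matrix:

```python
import numpy as np

A = np.array([[2, 1],
              [0, 3]])
e1 = np.array([1, 0])
e2 = np.array([0, 1])

# Each basis vector is sent to the corresponding column of A.
print(A @ e1)   # [2 0]  -> first column of A
print(A @ e2)   # [1 3]  -> second column of A
```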
What is a composition?
Applying one linear transformation after another; the composition is itself a linear transformation, represented by the product of the two matrices.
Does order matter for a composition?
Yes! A × B is generally not equal to B × A. This is because matrix multiplication involves taking the dot product of rows of the first matrix with columns of the second, and swapping the matrices changes which one provides the rows and which provides the columns.
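A quick NumPy check with two arbitrary 2×2 matrices (a rotation and a shear):

```python
import numpy as np

A = np.array([[0, -1],
              [1,  0]])   # 90-degree rotation
B = np.array([[1, 1],
              [0, 1]])    # shear

print(A @ B)   # [[ 0 -1]
               #  [ 1  1]]
print(B @ A)   # [[ 1 -1]
               #  [ 1  0]]  -> a different matrix
```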
What is the determinant?
The factor by which a linear transformation scales any area in the space.
Can the determinant be negative and what does this mean?
Yes, the determinant can be negative, and it means that the orientation of space is flipped (inverted). The absolute value still tells you the factor by which areas are scaled.
What happens when the determinant is zero?
When the determinant is zero, the associated linear transformation squishes all of space into a lower dimension. Any area (or volume) then collapses to zero, so the factor by which areas are scaled, the determinant, is zero.
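A small NumPy sketch with arbitrary matrices: a matrix with independent columns has a non-zero determinant (its area-scaling factor), while one whose columns are dependent squishes the plane onto a line and has determinant zero:

```python
import numpy as np

A = np.array([[3, 0],
              [0, 2]])    # stretches x by 3 and y by 2
print(np.linalg.det(A))   # 6.0 -> every area is scaled by a factor of 6

S = np.array([[1, 2],
              [2, 4]])    # second column is twice the first
print(np.linalg.det(S))   # ~0 (up to floating point) -> space squished onto a line
```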
Explain how we solve the system of linear equations Ax = v by explaining what the inverse of a matrix does.
For Ax = v, x is the unknown vector that lands on v under the linear transformation represented by A. If we apply that transformation in reverse, i.e. apply the inverse A⁻¹ to both sides, we get A⁻¹Ax = A⁻¹v. Because multiplying a matrix by its inverse gives the identity matrix I, this leaves x = A⁻¹v.
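A minimal NumPy sketch with an arbitrary invertible matrix and vector:

```python
import numpy as np

A = np.array([[2, 1],
              [1, 3]])
v = np.array([3, 5])

x = np.linalg.inv(A) @ v   # x = A^-1 v
print(x)                   # [0.8 1.4]
print(A @ x)               # recovers v (up to floating point)

# In practice, np.linalg.solve(A, v) is preferred over forming the inverse explicitly.
```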
What are eigenvectors?
Vectors that stay on their span after a linear transformation and only get stretched or squished by some scalar.
What are eigenvalues?
Scalars that determine how much the corresponding eigenvector gets stretched or squished during the corresponding linear transformation.
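A small NumPy check of Av = λv with an arbitrary matrix:

```python
import numpy as np

A = np.array([[3, 1],
              [0, 2]])

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # 3 and 2 for this matrix (order may vary)

# Each eigenvector stays on its own span: A @ v equals lambda * v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    print(np.allclose(A @ v, lam * v))   # True
```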
What is a diagonal matrix?
A matrix that has zeros everywhere except on its diagonal. The way to interpret this is that all the basis vectors are eigenvectors, with the diagonal entries of the matrix being their eigenvalues.
Does every matrix have a determinant? And an inverse?
No, only square matrices have determinants and inverses. Moreover, not every square matrix has an inverse: for that, its determinant must be non-zero.
Diagonalization
A square matrix A of order n is called diagonalizable if there is an invertible matrix P such that P⁻¹AP is a diagonal matrix. If A is diagonalizable, then the columns of P are eigenvectors of A and the diagonal entries of P⁻¹AP are the eigenvalues of A.
In other words, you come up with a basis consisting of eigenvectors such that your linear transformation is a diagonal matrix with respect to that basis.
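A minimal NumPy sketch with an arbitrary diagonalizable matrix:

```python
import numpy as np

A = np.array([[4, 1],
              [2, 3]])

eigenvalues, P = np.linalg.eig(A)   # columns of P are eigenvectors of A
D = np.linalg.inv(P) @ A @ P        # P^-1 A P

# D is (numerically) diagonal, with the eigenvalues of A on its diagonal.
print(np.round(D, 10))
print(eigenvalues)                  # 5 and 2 for this matrix (order may vary)
```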
PCA (from a Reddit answer)
Maybe this is less practical, but still an application of eigenvalues/eigenvectors. If you have a data cloud, i.e. multivariate data, you can compute the covariance matrix of the data.
That covariance matrix is positive (semi)definite, so it only has non-negative eigenvalues; if it is full rank (i.e. positive definite), they are all positive and you will have as many eigenvectors as the rank.
Since the eigenvectors are all orthogonal to each other, they give you a new basis for your data cloud: the one in which each basis vector is rotated so as to maximise the variance along it. The variance along a given basis vector is the eigenvalue associated with that particular eigenvector.
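A minimal sketch of that idea with NumPy; the data below is randomly generated just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# 500 samples of 2-D data with correlated coordinates.
data = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0],
                                             [1.0, 0.5]])

cov = np.cov(data, rowvar=False)                   # 2x2 covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)    # symmetric matrix -> eigh

# The eigenvectors are orthogonal directions in the data cloud; each eigenvalue
# is the variance of the data along its eigenvector (largest = first principal component).
print(eigenvalues)
print(eigenvectors)
```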
Describe the transpose of a matrix
the transpose of a matrix swaps its rows and columns, so that row 1 of matrix A becomes column 1 of its transpose A'.
What is a square matrix
a matrix with an equal number of rows and columns.
The trace
the sum of the diagonal entries.
do all matrices have a trace?
No, only square matrices have a main diagonal and therefore a trace.
what is a special property of a symmetric matrix?
It is equal to its own transpose.
Does order matter for matrix addition?
No, matrix addition is commutative: A + B = B + A.
Does order matter for matrix multiplication? Why?
Yes, because you multiply the rows of the first matrix with the columns of the second (and the number of columns of the first must equal the number of rows of the second for the product to be defined).
What does it mean for the trace of a matrix to be invariant under cyclic permutations?
It means that the trace of a matrix product stays the same as long as the cyclic order doesn't change:
tr(AB) = tr(BA), and tr(ABC) = tr(BCA) = tr(CAB).
However, a reordering that is not cyclic is in general not equal: tr(ABC) ≠ tr(CBA).
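A quick NumPy check with arbitrary random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3))
B = rng.normal(size=(3, 3))
C = rng.normal(size=(3, 3))

print(np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A)))  # True  (cyclic shift)
print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))  # True  (cyclic shift)
print(np.isclose(np.trace(A @ B @ C), np.trace(C @ B @ A)))  # False (not cyclic; almost surely unequal for random matrices)
```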
Are all square matrices invertible?
No, only square matrices with a non-zero determinant are invertible. A transformation with determinant zero squishes space into a lower dimension, and undoing that would require mapping each point of the lower-dimensional output back to many inputs, which no function (and hence no matrix) can do.
Orthogonal matrices
square matrices whose inverse is equal to their transpose (Q⁻¹ = Qᵀ), meaning they preserve lengths and angles when used as a transformation. They are useful for representing isometric transformations like rotations and reflections. Their determinant is either 1 or −1.
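A small NumPy sketch using a rotation matrix (an arbitrary example of an orthogonal matrix):

```python
import numpy as np

theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation by 45 degrees

print(np.allclose(Q.T @ Q, np.eye(2)))   # True: the inverse equals the transpose
print(np.round(np.linalg.det(Q), 10))    # 1.0 (a reflection would give -1.0)

v = np.array([3.0, 4.0])
print(np.linalg.norm(v), np.linalg.norm(Q @ v))   # lengths preserved: both ~5.0
```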
What does Av=λv mean?
It means that the matrix-vector product Av gives the same result as just scaling the eigenvector v by some (eigen)value lambda.
How do we find the eigenvalues λ and eigenvectors v that satisfy Av = λv?
Rewrite the right-hand side as a matrix-vector multiplication as well: λv = (λI)v.
With both sides written as matrix-vector products, subtract the right-hand side and factor out v: (A − λI)v = 0.
Now we ask ourselves: when does this new matrix times a non-zero v give the zero vector?
That happens exactly when the matrix A − λI squishes space into a lower dimension, i.e. when det(A − λI) = 0.
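A small NumPy check of that condition with an arbitrary matrix: for each eigenvalue λ, the matrix A − λI is singular, so its determinant is (numerically) zero:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues = np.linalg.eigvals(A)   # 3 and 1 for this matrix (order may vary)
for lam in eigenvalues:
    # det(A - lambda * I) is ~0 for every eigenvalue (up to floating point).
    print(np.linalg.det(A - lam * np.eye(2)))
```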
When is a matrix positive semidefinite
A matrix is positive semidefinite if and only if all of its eigenvalues are non-negative (i.e., λi ≥ 0 for all i).
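A quick NumPy check with an arbitrary symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])          # symmetric example matrix

eigenvalues = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix, ascending
print(eigenvalues)                   # [1. 3.]
print(np.all(eigenvalues >= 0))      # True -> positive semidefinite (here even positive definite)
```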
explain orthogonal diagonalization
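Orthogonal diagonalization is diagonalizing a symmetric matrix with an orthogonal change of basis: A = QΛQᵀ, where the columns of the orthogonal matrix Q are orthonormal eigenvectors of A and Λ is diagonal with the eigenvalues. A minimal NumPy sketch with an arbitrary symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # symmetric example matrix

eigenvalues, Q = np.linalg.eigh(A)   # Q has orthonormal eigenvector columns
Lambda = np.diag(eigenvalues)

print(np.allclose(Q.T @ Q, np.eye(2)))    # True: Q is orthogonal
print(np.allclose(Q @ Lambda @ Q.T, A))   # True: A = Q Lambda Q^T
```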