vector
A quantity that has magnitude and direction
dimension of a vector
Number of components in a vector.
vector space
A set of vectors that is closed under vector addition and scalar multiplication.
dimension of a matrix
The number of rows and columns of a matrix, written in the form rows×columns.
Vector Addition
combining two vectors by adding their corresponding components to produce a new vector
Vector Multiplication
not commutative
a × b ≠ b × a
Linear Combination
A sum of scalar multiples of vectors. The scalars are called the weights.
span of vectors
Set of all linear combinations of vectors.
dot product
vector times vector, resulting in a scalar quantity
u · v = ||u|| ||v|| cos(θ), where θ is the angle between u and v
unit vector
v/||v||
norm of a unit vector
||u|| = sqrt(u · u), which equals 1 for a unit vector
distance between two vectors
d(u,v) = ||u-v||
orthogonal vectors
two vectors whose dot product equals zero
projection of u onto v
proj_v(u) = ((u · v)/||v||^2) v
orth_v(u)
u − proj_v(u)  (the component of u orthogonal to v)
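A quick numpy check of these two formulas; the vectors u and v below are made-up examples:

```python
import numpy as np

u = np.array([3.0, 4.0])
v = np.array([1.0, 0.0])

# proj_v(u) = ((u . v) / ||v||^2) v
proj = (np.dot(u, v) / np.dot(v, v)) * v
orth = u - proj                     # orth_v(u): the part of u orthogonal to v

print(proj)                         # [3. 0.]
print(orth)                         # [0. 4.]
print(np.dot(orth, v))              # 0.0 -- the orthogonal part really is orthogonal to v
```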
Diagonal Matrix
a square matrix whose entries not on the main diagonal are all zero
identity matrix
the square matrix I with 1's on the main diagonal and 0's elsewhere; multiplying any matrix by I leaves that matrix unchanged
transpose of a matrix
Swap the rows and columns of A: the (i, j) entry of A^T is the (j, i) entry of A.
dimension of A*B (matrix multiplication)
if A is m×n and B is n×p, then AB is m×p (rows from A, columns from B)
a system of equations can have
one unique solution
no solution
infinitely many solutions
homogeneous system
a system of the form Ax = 0; always has the zero vector (the trivial solution) as a solution
rref
Same as REF, plus:
3. The pivot in each non-zero row equals 1
4. Each pivot is the only non-zero entry in its column
REF of matrix
1. any rows consisting entirely of zeros are at the bottom
2. each leading entry of a row is in a column to the right of the leading entry of the row above it
Inverse Matrix
A matrix A^-1 that, when multiplied with A, yields the identity matrix: AA^-1 = A^-1A = I.
A is not invertible if...
1. A is not a square matrix
2. A is a zero matrix
3. A has a zero row or zero column
Determinant of a matrix
for a 2×2 matrix [[a, b], [c, d]]: det(A) = ad − bc
Steps to take an inverse of a matrix
use the Gauss-Jordan Elimination method by augmenting your matrix A with the identity matrix [A|I] and performing row operations to transform it into [I|A⁻¹]
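A minimal numpy sketch of this [A | I] row-reduction procedure; the matrix A is an arbitrary 2×2 example, and in practice numpy.linalg.inv does the same job:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])
n = A.shape[0]

M = np.hstack([A, np.eye(n)])       # augment: [A | I]

# Gauss-Jordan elimination with partial pivoting
for col in range(n):
    pivot = col + np.argmax(np.abs(M[col:, col]))   # pick the largest pivot in this column
    M[[col, pivot]] = M[[pivot, col]]               # swap it into place
    M[col] /= M[col, col]                           # scale the pivot row so the pivot is 1
    for row in range(n):
        if row != col:
            M[row] -= M[row, col] * M[col]          # zero out the rest of the column

A_inv = M[:, n:]                                    # right half is now A^-1
print(np.allclose(A @ A_inv, np.eye(n)))            # True
```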
finding inverse of a 2x2 matrix
[[a, b], [c, d]]
Determinant: |A| = (ad) - (bc).
Adjoint: Swap a and d, then negate b and c: [[d, -b], [-c, a]].
Inverse: A⁻¹ = (1/|A|) * [[d, -b], [-c, a]]
determinant and row operations: switching two rows
multiplies the determinant by −1
determinant and row operations: multiplying a row by a scalar
multiplies the determinant by the scalar
determinant and row operations: adding a multiple of one row to another
does not change the determinant
det(A^T)
det(A)
det(A^-1)
det(A)^-1 = 1/det(A)
determinant of upper/lower triangular matrix or diagonal matrix
the product of all diagonal entries
cofactor expansion steps:
select any row or column of a matrix, multiply each element in that row/column by its corresponding cofactor, and then sum these products
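A small recursive sketch of cofactor expansion along the first row (illustration only; it is O(n!), and numpy.linalg.det is the practical choice). The 3×3 matrix is a made-up example:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # delete row 0 and column j
        cofactor = (-1) ** j * det_cofactor(minor)              # sign (-1)^(0+j) times the minor's det
        total += A[0, j] * cofactor
    return total

A = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det_cofactor(A))      # -3.0
print(np.linalg.det(A))     # agrees (up to floating-point error)
```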
matrix transformation
a function T_A: R^k → R^n defined by T_A(x) = Ax (where A is n×k)
also called a mapping
domain
Set of all possible input values.
codomain
the set of all possible outputs for a function
Range(Ta) or Image(Ta)
set of actual outputs
Range(Ta) for linear systems
the set of vectors b for which the system Ax = b is consistent
Range(Ta) for linear combinations
If T_A is the transformation associated to A, then range(T_A) is the span of the columns of A, i.e., the set of all linear combinations of the columns of A.
is b in the range of Ta?
asking whether there is a v such that T_A(v) = b
(so augment b to A and then solve)
if there is a solution, it is in range
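One way to carry out this check numerically: b is in the range exactly when augmenting b to A does not increase the rank. The matrix and vectors below are made-up examples:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])   # the columns are parallel, so the range is a line in R^3

def in_range(A, b):
    # Ax = b is consistent iff augmenting b does not increase the rank
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(in_range(A, np.array([1.0, 2.0, 3.0])))   # True: b is a multiple of column 1
print(in_range(A, np.array([1.0, 0.0, 0.0])))   # False: not in the span of the columns
```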
linear transformation: rotation
R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]]
Clockwise rotation: just flip the sign of θ
reflection across x axis
[[1, 0], [0, −1]]
reflection through origin
[[−1, 0], [0, −1]]
reflection across y axis
[[−1, 0], [0, 1]]
reflection across the line y = x
[[0, 1], [1, 0]]
scaling
[[s_x, 0], [0, s_y]]
s_x scales horizontally
s_y scales vertically
horizontal shear
[[1, k], [0, 1]]
vertical shear
[[1, 0], [k, 1]]
projection onto x axis
[[1, 0], [0, 0]]
projection onto y axis
[[0, 0], [0, 1]]
translation
[[1, 0, t_x], [0, 1, t_y], [0, 0, 1]]
t_x = how far you move a point in the x-direction (horizontal shift)
t_y = how far you move a point in the y-direction (vertical shift)
properties of linear transformations
Additivity: T(u + v) = T(u) + T(v) → the transformation distributes over vector addition.
Homogeneity (scalar multiplication): T(c·u) = c·T(u) → scaling before or after the transformation gives the same result.
Preserves the origin: T(0) = 0 → linear transformations always map the zero vector to itself.
Matrix representation: every linear transformation T: R^n → R^m can be written as T(x) = Ax, where A is an m×n matrix.
Composition corresponds to matrix multiplication: if T1(x) = Ax and T2(x) = Bx, then (T2 ∘ T1)(x) = B(Ax) = (BA)x.
Invertibility: a linear transformation is invertible if and only if its matrix is invertible (determinant ≠ 0) → the inverse transformation corresponds to the inverse matrix.
Determinant meaning:
|det(A)| = scaling factor of area/volume.
det(A) < 0 → orientation is flipped (reflection).
det(A) = 0 → the transformation squashes space into a lower dimension (not invertible).
kernel(T) or null space of A
vectors mapped to 0 by T
so Av=0
a transformation is onto if
Av = b has at least one solution for every b in the codomain
one-to-one
Av = b has at most one solution for every b
vector subspace
A set 𝑈 of vectors in 𝑉 such that: adding two vectors in 𝑈 yields a vector in 𝑈, and scaling a vector in 𝑈 yields a vector in 𝑈 (so 𝑈 contains the zero vector).
basis
linearly independent spanning set
linearly independent
the only solution to
c1v1+c2v2+⋯+cnvn=0
is when all coefficients are zero:
c1=c2=⋯=cn=0
Square matrix → check determinant.
Non‑square matrix → check rank (full column rank = independent).
Always → solve Ax = 0 if unsure (the RREF should show only the trivial solution, i.e. no free variables)
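A short numpy illustration of these checks; the three example vectors below are deliberately dependent (the third is the sum of the first two):

```python
import numpy as np

# Columns of V are the vectors being tested
V = np.column_stack([[1.0, 0.0, 1.0],
                     [2.0, 1.0, 0.0],
                     [3.0, 1.0, 1.0]])   # third column = first + second

rank = np.linalg.matrix_rank(V)
print(rank, V.shape[1])                  # 2 vs 3: not full column rank -> dependent

# Square case: determinant test
print(np.isclose(np.linalg.det(V), 0))   # True -> columns are dependent
```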
dimension of a subspace
the number of vectors in any basis for the subspace
let V be a subspace of R^n with dim(V) = m
basis must have exactly m vectors
spanning set must have at least m vectors
linearly independent set must have at most m vectors
column space
Put the matrix A into row-reduced echelon form (RREF).
Identify the pivot columns (the columns that contain leading 1's).
The corresponding columns in the original matrix (not the reduced one!) form a basis for the column space.
Dimension: The number of pivot columns = rank of A
row space
Row reduce A to RREF.
The non‑zero rows of the RREF form a basis for the row space.
Dimension: The number of non‑zero rows = rank of A
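A sympy sketch covering both of the last two cards: rref() returns the RREF together with the pivot-column indices, which give a column-space basis (taken from the original matrix) and a row-space basis (taken from the RREF). The matrix is a made-up rank-2 example:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6],
            [1, 1, 1]])

R, pivot_cols = A.rref()                          # RREF and the pivot-column indices

col_basis = [A.col(j) for j in pivot_cols]        # pivot columns of the ORIGINAL matrix
row_basis = [R.row(i) for i in range(A.rank())]   # non-zero rows of the RREF sit at the top

print(pivot_cols)    # (0, 1)
print(col_basis)     # basis for Col(A)
print(row_basis)     # basis for Row(A)
print(A.rank())      # 2 = number of pivot columns = number of non-zero rows
```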
rank(A) =
dim(Col(A)) = dim(Row(A)) = dim(Col(A^T)) = rank(A^T)
rank-nullity theorem
rank(A) + nullity(A) = number of columns, where rank is number of pivot columns, and nullity is number of free variables
change of basis
Form the change of basis matrix
Put the new basis vectors as columns in a matrix P. Example:
P = [[1, 1], [1, −1]]
Convert coordinates from new basis to old basis
If a vector has coordinates [v]_B in the new basis, then in the standard basis:
[v]_standard = P [v]_B
Convert coordinates from old basis to new basis
Use the inverse:
[v]_B = P^-1 [v]_standard
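A numpy version of these three steps, using the same example P as above; v_B is an arbitrary coordinate vector in the new basis:

```python
import numpy as np

P = np.array([[1.0, 1.0],
              [1.0, -1.0]])            # new basis vectors as columns

v_B = np.array([2.0, 3.0])             # coordinates of v in the new basis B

v_std = P @ v_B                        # new -> standard: [v]_standard = P [v]_B
v_B_again = np.linalg.solve(P, v_std)  # standard -> new: [v]_B = P^-1 [v]_standard

print(v_std)       # [ 5. -1.]
print(v_B_again)   # [2. 3.]
```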
Gram-Schmidt Process
Start with a set of linearly independent vectors {v1,v2,...,vn}
Define u1=v1
u2 = v2 − proj_{u1}(v2)
u3 = v3 − proj_{u1}(v3) − proj_{u2}(v3), and so on
then {u1, u2, ..., un} is an orthogonal basis for span{v1, v2, ..., vn}
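A minimal Gram-Schmidt sketch in numpy, following the subtraction pattern above; the input vectors are made-up and assumed linearly independent:

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn linearly independent vectors into an orthogonal basis."""
    basis = []
    for v in vectors:
        u = np.array(v, dtype=float)
        for b in basis:
            u -= (np.dot(u, b) / np.dot(b, b)) * b   # subtract the projection onto each earlier u
        basis.append(u)
    return basis

us = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
print(np.isclose(np.dot(us[0], us[1]), 0))   # True
print(np.isclose(np.dot(us[0], us[2]), 0))   # True
print(np.isclose(np.dot(us[1], us[2]), 0))   # True: pairwise orthogonal
```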
Least squares
Set up the normal equations: A^T A x = A^T b
Solve for x
Equivalent to projecting b onto the column space of A
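A numpy sketch of the normal-equation route, checked against numpy's built-in solver; the data is a made-up line-fitting example y ≈ c0 + c1·t:

```python
import numpy as np

# Overdetermined system Ax = b: fit a line y = c0 + c1*t to four data points
t = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([1.0, 2.2, 2.9, 4.1])
A = np.column_stack([np.ones_like(t), t])

# Normal equations: A^T A x = A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Same answer from numpy's built-in least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_normal)
print(np.allclose(x_normal, x_lstsq))   # True
```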
inner products
⟨u, v⟩ = u^T v
properties of inner products
Symmetry: ⟨u,v⟩=⟨v,u⟩
Linearity: ⟨au+bv,w⟩=a⟨u,w⟩+b⟨v,w⟩
Positive definiteness: ⟨v,v⟩≥0, equality only if v=0
Defines length: ∥v∥=sqrt(⟨v,v⟩)
Defines orthogonality: ⟨u,v⟩=0
Defines distance: d(u, v) = ∥u − v∥ = sqrt(⟨u − v, u − v⟩)
eigen problems
Solve det(A−λI)=0 for eigenvalues λ
For each λ, solve (A−λI)v=0 for eigenvectors.
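In numpy this whole recipe is numpy.linalg.eig; the 2×2 matrix below is a made-up example with eigenvalues 3 and 1:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)   # columns of eigvecs are the eigenvectors
print(eigvals)                        # 3 and 1 (order may vary)

for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))   # True: A v = lambda v
```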
eigenvalues of triangular matrix
just the entries on the diagonal
product of eigenvalues
the determinant of A
diagonalisation
Find eigenvalues and eigenvectors.
Form matrix P with eigenvectors as columns.
Then A = PDP^-1, where D is diagonal with the eigenvalues.
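A quick numpy check of this factorization on a made-up diagonalizable 2×2 matrix:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)   # eigenvectors as columns of P
D = np.diag(eigvals)            # eigenvalues on the diagonal of D

print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True: A = P D P^-1
```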
Markov Process
Transition matrix P with nonnegative entries, columns (or rows) sum to 1.
State vector evolves as x_{k+1} = P x_k
Long‑term behavior: look at steady state x such that Px=x
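A small numpy illustration with a made-up column-stochastic matrix: iterating x_{k+1} = P x_k approaches the steady state, which is the eigenvector of P for eigenvalue 1 (scaled so its entries sum to 1):

```python
import numpy as np

P = np.array([[0.9, 0.2],
              [0.1, 0.8]])           # columns sum to 1

x = np.array([1.0, 0.0])             # initial state
for _ in range(100):
    x = P @ x                        # x_{k+1} = P x_k
print(x)                             # approximately [0.667, 0.333]

# Steady state directly: eigenvector for the eigenvalue closest to 1
eigvals, eigvecs = np.linalg.eig(P)
v = eigvecs[:, np.argmin(np.abs(eigvals - 1.0))].real
print(v / v.sum())                   # [2/3, 1/3]
```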
SVD
Factor A into A = UΣV^T
U: orthogonal matrix (columns = left singular vectors).
Σ: diagonal with the singular values (square roots of the eigenvalues of A^T A).
V: orthogonal matrix (columns = right singular vectors).
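A numpy check of this factorization and of the singular-value/eigenvalue relationship, on a made-up 2×2 matrix:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, s, Vt = np.linalg.svd(A)          # s = singular values, Vt = V^T
Sigma = np.diag(s)

print(np.allclose(U @ Sigma @ Vt, A))                          # True: A = U Sigma V^T
print(np.allclose(s**2, np.linalg.eigvalsh(A.T @ A)[::-1]))    # squares = eigenvalues of A^T A
```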
QR factorization
Apply Gram-Schmidt to columns of A.
Get orthonormal matrix Q.
Upper triangular matrix R satisfies A=QR
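numpy's built-in QR does this directly (it uses Householder reflections rather than Gram-Schmidt, so signs may differ from a hand computation); the matrix is a made-up example:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

Q, R = np.linalg.qr(A)    # Q has orthonormal columns, R is upper triangular

print(np.allclose(Q.T @ Q, np.eye(2)))   # True: columns of Q are orthonormal
print(np.allclose(Q @ R, A))             # True: A = QR
```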
transition matrix P from B2 to B1
P = B1^-1 · B2, where B1 and B2 are the matrices with each basis's vectors as columns
Guaranteed eigenvector of A^n
it's just the original eigenvector x of A (now with eigenvalue λ^n)
If λ is an eigenvalue of A
λ^k is an eigenvalue of A^k
Suppose A is mxn, B is nxp, and C is pxq
then ABC is mxq
if v and w are parallel, then proj_v(w) = ?
w
(AB)^T
B^TA^T
If A*B = 0 and A is invertible
then B is 0
det(2A)
2^n det(A), where A is n×n
det(A^-1)
1/det(A)
What is the angle between a vector v and its unit vector v/||v||?
Zero, since they are in the same direction