Suppose that A is an m × n matrix. How do we determine the number of pivotal columns?
Row reduce A to REF or RREF and count the pivot positions (the leading entries); each pivot sits in a pivotal column.
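A minimal sketch of this count, assuming SymPy is available; the matrix below is an invented example, and `rref()` returns both the reduced form and the pivot column indices.

```python
# Minimal sketch: count pivotal columns by row-reducing to RREF with SymPy.
# The matrix is an arbitrary example, not taken from the flashcards.
from sympy import Matrix

A = Matrix([
    [1, 2, 3],
    [2, 4, 6],
    [1, 0, 1],
])

rref_form, pivot_cols = A.rref()   # rref() returns (RREF matrix, tuple of pivot column indices)
print(rref_form)
print("pivotal columns:", pivot_cols)          # (0, 1)
print("number of pivots (rank):", len(pivot_cols))
```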
Suppose that A is an m × n matrix. What do the pivotal columns tell us about the solution to the equation A𝐱 = 𝐛?
They indicate which variables are basic. If not every column is pivotal, there are free variables, so a consistent system A𝐱 = 𝐛 has infinitely many solutions.
Suppose that A is an m × n matrix. What space is equal to the span of the pivotal columns?
The column space of A, Col(A).
Suppose that A is an m × n matrix. What is the difference between solving A𝐱 = 𝐛 and A𝐱 = 0? How are these solutions related geometrically?
A𝐱 = 𝐛 is the general (nonhomogeneous) system, while A𝐱 = 0 is the homogeneous system whose solution set is the nullspace, a subspace through the origin. If A𝐱 = 𝐛 is consistent, its solution set is the nullspace translated by a particular solution: 𝐱 = 𝐱ₚ + 𝐱ₕ, an affine space parallel to Null(A).
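A hedged numerical sketch of this decomposition, assuming NumPy and SciPy; the rank-deficient matrix A and the right-hand side b are invented for illustration.

```python
# Sketch: solution set of A x = b  =  (particular solution) + Null(A).
# Example data is arbitrary; A is rank-deficient so the nullspace is nontrivial.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3.],
              [2., 4., 6.]])       # rank 1, so nullity = 3 - 1 = 2
b = np.array([6., 12.])            # consistent right-hand side

x_p, *_ = np.linalg.lstsq(A, b, rcond=None)   # one particular solution
N = null_space(A)                              # orthonormal basis for Null(A), shape (3, 2)

# Any x_p + N @ c solves A x = b, for arbitrary coefficients c.
c = np.array([1.5, -0.7])
x = x_p + N @ c
print(np.allclose(A @ x, b))        # True
print(np.allclose(A @ (N @ c), 0))  # True: the homogeneous part maps to zero
```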
Suppose that A is an m × n matrix. If rank(A) = r, where 0 < r ≤ n, how many columns are pivotal? What is the dimension of the solution space to A𝐱 = 0?
There are r pivotal columns. The nullity is n - r.
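A quick rank–nullity check, assuming NumPy and SciPy; the 3 × 4 example matrix is made up.

```python
# Sketch: verify nullity = n - rank for an arbitrary example matrix.
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 0., 2., 1.],
              [0., 1., 3., 2.],
              [1., 1., 5., 3.]])   # 3 x 4, third row = row1 + row2

n = A.shape[1]
r = np.linalg.matrix_rank(A)
nullity = null_space(A).shape[1]   # number of basis vectors for Null(A)

print(r, nullity, r + nullity == n)   # 2 2 True
```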
Suppose that T_A is a linear transformation T_A: ℝⁿ → ℝᵐ with matrix A. What are the dimensions of A?
A is an m × n matrix.
Suppose that T_A is a linear transformation T_A: ℝⁿ → ℝᵐ with matrix A. If 𝐱 ∈ ℝⁿ, how can we find T_A(𝐱)?
T_A(𝐱) = A𝐱.
Suppose that T_A is a linear transformation T_A: ℝⁿ → ℝᵐ with matrix A. Using the matrix A, how would we know if T_A is one-to-one? Onto?
T_A is one-to-one if Null(A) = {0} (rank = n); onto if Col(A) = ℝᵐ (rank = m).
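A sketch of these rank tests, assuming NumPy; the two example matrices (one tall, one wide) are invented.

```python
# Sketch: rank-based checks for one-to-one and onto.
import numpy as np

def is_one_to_one(A):
    # T_A is injective iff rank(A) equals the number of columns (nullspace is {0}).
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_onto(A):
    # T_A is surjective iff rank(A) equals the number of rows (Col(A) = R^m).
    return np.linalg.matrix_rank(A) == A.shape[0]

tall = np.array([[1., 0.], [0., 1.], [1., 1.]])   # 3 x 2: injective, not surjective
wide = np.array([[1., 0., 1.], [0., 1., 1.]])     # 2 x 3: surjective, not injective

print(is_one_to_one(tall), is_onto(tall))   # True False
print(is_one_to_one(wide), is_onto(wide))   # False True
```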
Suppose that T_A is a linear transformation T_A: ℝⁿ → ℝᵐ with matrix A. How do we find the range of T_A?
The range is the column space of A.
Suppose A is an n × n invertible matrix. What can you say about the columns of A?
They are linearly independent and span ℝⁿ.
Suppose A is an n × n invertible matrix. What is rank(A)? nullity(A)?
rank(A) = n; nullity(A) = 0.
Suppose A is an n × n invertible matrix. What do you know about det(A)?
det(A) ≠ 0.
Suppose A is an n × n invertible matrix. How many solutions are there to the equation A𝐱 = 𝐛?
Exactly one solution for every 𝐛 ∈ ℝⁿ.
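A small sketch, assuming NumPy; A and b are arbitrary, with A chosen invertible.

```python
# Sketch: an invertible A gives exactly one solution x for any b.
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])           # det = 5 != 0, so A is invertible
b = np.array([3., 5.])

x = np.linalg.solve(A, b)          # the unique solution
print(x)
print(np.allclose(A @ x, b))                  # True
print(np.allclose(x, np.linalg.inv(A) @ b))   # same answer via A^{-1} b
```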
Suppose A is an n × n invertible matrix. What is the nullspace of A?
Null(A) = {0}.
Suppose A is an n × n invertible matrix. Do you know anything about the eigenvalues of A?
None of the eigenvalues is zero, since their product equals det(A) ≠ 0; their sum equals trace(A).
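A sketch verifying these identities numerically, assuming NumPy; the 2 × 2 matrix is arbitrary.

```python
# Sketch: for an invertible A, no eigenvalue is zero; their product is det(A)
# and their sum is trace(A).
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])

eigvals = np.linalg.eigvals(A)
print(eigvals)                                           # neither eigenvalue is 0
print(np.isclose(np.prod(eigvals), np.linalg.det(A)))    # True
print(np.isclose(np.sum(eigvals), np.trace(A)))          # True
```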
Suppose A is an n × n invertible matrix. Do you know whether or not A is diagonalizable?
Not necessarily. Invertibility does not imply diagonalizability; A is diagonalizable if and only if it has n linearly independent eigenvectors.
Suppose A is an n × n matrix with characteristic equation p(λ) = det(A - λI). What is the degree of p(λ)?
Degree = n.
Suppose A is an n × n matrix with characteristic equation p(λ) = det(A - λI). Counting multiplicities, how many eigenvalues will A have?
A has n eigenvalues.
Suppose A is an n × n matrix with characteristic equation p(λ) = det(A - λI). If p(0) = 0, what do you know about the matrix A?
0 is an eigenvalue → A is not invertible → det(A) = 0.
Suppose A is an n × n matrix with characteristic equation p(λ) = det(A - λI). How will you know if A is diagonalizable?
If A has n linearly independent eigenvectors (i.e., geometric multiplicity = algebraic multiplicity for each eigenvalue).
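A sketch comparing the two multiplicities with SymPy; the defective and diagonal example matrices are invented.

```python
# Sketch: compare algebraic and geometric multiplicities with SymPy.
from sympy import Matrix

defective = Matrix([[2, 1],
                    [0, 2]])       # eigenvalue 2: algebraic mult. 2, geometric mult. 1
diagonal = Matrix([[2, 0],
                   [0, 3]])

for M in (defective, diagonal):
    for eigval, alg_mult, eigvecs in M.eigenvects():
        geo_mult = len(eigvecs)    # dimension of the eigenspace Null(M - eigval*I)
        print(eigval, alg_mult, geo_mult)
    print("diagonalizable:", M.is_diagonalizable())
```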
Suppose A is an n × n matrix with characteristic equation p(λ) = det(A - λI). Suppose p(c) = 0 for some real number c. How do you find the values of 𝐱 for which A𝐱 = c𝐱?
Solve (A - cI)𝐱 = 0 → Null(A - cI).
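A sketch of this computation, assuming NumPy and SciPy; the matrix and the root c = 5 are a made-up worked example.

```python
# Sketch: eigenvectors for eigenvalue c are the nonzero vectors in Null(A - c*I).
import numpy as np
from scipy.linalg import null_space

A = np.array([[4., 1.],
              [2., 3.]])
c = 5.0                                  # a root of p(lambda) for this A

E = null_space(A - c * np.eye(2))        # basis for the eigenspace of c
v = E[:, 0]
print(np.allclose(A @ v, c * v))         # True: A v = c v
```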
Suppose A is an n × n matrix with characteristic equation p(λ) = det(A - λI). What is the sum and product of the eigenvalues?
Sum = trace(A); product = det(A).
Suppose A is an n × n matrix with characteristic equation p(λ) = det(A - λI). If A is not triangular or diagonal, do the solutions of p(λ) change when A is reduced to echelon form? Why or why not?
Yes, in general. Row reduction is not a similarity transformation, so it does not preserve the characteristic polynomial, and the eigenvalues typically change.
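A small demonstration, assuming NumPy and SymPy; the invertible example matrix row-reduces to the identity, whose eigenvalues differ from those of A.

```python
# Sketch: eigenvalues are generally not preserved by row reduction.
import numpy as np
from sympy import Matrix

A_list = [[4, 1],
          [2, 3]]
A = np.array(A_list, dtype=float)

rref, _ = Matrix(A_list).rref()           # RREF of an invertible 2x2 matrix is the identity
R = np.array(rref.tolist(), dtype=float)

print(np.linalg.eigvals(A))               # approximately [5., 2.]
print(np.linalg.eigvals(R))               # [1., 1.]  -- different, as expected
```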
Span
The set of all linear combinations of a set of vectors. It forms the smallest subspace containing those vectors.
Linear Combination
An expression made up of scalar multiples of vectors added together (e.g., a₁v₁ + a₂v₂ + ... + aₙvₙ).
Linearly Independent
A set of vectors is linearly independent if no vector in the set can be written as a linear combination of the others.
Linear Transformation
A function T: ℝⁿ → ℝᵐ that preserves vector addition and scalar multiplication: T(c𝐯 + d𝐰) = cT(𝐯) + dT(𝐰).
Column Space
The span of the columns of a matrix A. It consists of all possible outputs A𝐱.
Nullspace
The set of all vectors 𝐱 such that A𝐱 = 0.
Transpose
The matrix obtained by flipping rows and columns: (Aᵀ)ᵢⱼ = Aⱼᵢ.
Inverse
For a square matrix A, its inverse A⁻¹ satisfies AA⁻¹ = A⁻¹A = I.
Dimension
(number of rows) × (number of columns), denoted m × n.
Rank
The dimension of the column space of a matrix (i.e., number of linearly independent columns).
Nullity
The dimension of the nullspace of a matrix.
Determinant
A scalar value computed from a square matrix that tells us whether the matrix is invertible and encodes volume scaling.
Eigenvalue
A scalar λ such that A𝐱 = λ𝐱 for some nonzero 𝐱.
Eigenvector
A nonzero vector 𝐱 satisfying A𝐱 = λ𝐱 for some scalar λ.
Eigenspace
The set of all eigenvectors corresponding to a specific eigenvalue, plus the zero vector. It is the nullspace of A - λI.
Diagonalizable
A matrix is diagonalizable if it has enough linearly independent eigenvectors to form a basis (i.e., it can be written as PDP⁻¹).
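A sketch of the PDP⁻¹ factorization, assuming NumPy; the symmetric example matrix is chosen only because it is guaranteed to be diagonalizable.

```python
# Sketch: diagonalization A = P D P^{-1} from NumPy's eigendecomposition.
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])

eigvals, P = np.linalg.eig(A)            # columns of P are eigenvectors
D = np.diag(eigvals)

print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^{-1}
```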
Orthogonal
A set of vectors is orthogonal if each pair of distinct vectors in the set has a dot product of zero.
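A sketch of the pairwise check, assuming NumPy; the three vectors are an invented orthogonal set in ℝ³.

```python
# Sketch: check pairwise orthogonality by verifying every distinct dot product is zero.
import numpy as np

vectors = [np.array([1., 1., 0.]),
           np.array([1., -1., 0.]),
           np.array([0., 0., 2.])]

def is_orthogonal_set(vs, tol=1e-12):
    return all(abs(np.dot(vs[i], vs[j])) < tol
               for i in range(len(vs)) for j in range(i + 1, len(vs)))

print(is_orthogonal_set(vectors))        # True
```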