GT Math Proofs

51 Terms

1
Explain why “A x = 0 has only the trivial solution” is equivalent to “columns of A are linearly independent.”
A x = 0 means x₁a₁ + … + xₙaₙ = 0, where the aᵢ are the columns of A. Saying the only solution is x = 0 says the only linear combination of the columns that equals 0 is the one with all coefficients 0, which is exactly the definition of linear independence, and vice versa.
2
Why can a 2×3 matrix A (R³→R²) never be one-to-one?
A 2×3 matrix has rank at most 2 < 3, so by rank–nullity its null space has dimension at least 1. That gives a nonzero solution of A x = 0, so the map is not one-to-one.
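A quick numerical check of this card (not from the original deck), using an arbitrary 2×3 example and NumPy: the SVD exposes a nonzero null-space vector, confirming the map cannot be one-to-one.

```python
import numpy as np

# Arbitrary 2x3 example; rank is at most 2, so Null(A) is at least 1-dimensional.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

# For this example the last right-singular vector spans Null(A).
_, s, Vt = np.linalg.svd(A)
x = Vt[-1]                       # a nonzero vector with A x = 0
print(np.allclose(A @ x, 0))     # True: a nonzero x maps to 0, so x -> A x is not one-to-one
```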
3
Why does rank(A) = n for an m×n matrix imply the columns are linearly independent?
Rank(A) = n means every column is a pivot column and there are no free variables. Then A x = 0 has only x = 0, so the columns are independent.
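A small sanity check on an arbitrary 3×2 example matrix: NumPy's rank equals the number of columns, which is the pivot-column condition the card describes.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [3.0, 4.0]])                       # arbitrary 3x2 example

print(np.linalg.matrix_rank(A) == A.shape[1])    # True: rank = n, so the columns are independent
```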
4
Why is the solution set of a homogeneous system B x = 0 a subspace?
It contains 0, and if Bx = 0 and By = 0 then B(x + y) = 0 and B(c x) = 0. So it is closed under addition and scalar multiplication and is a subspace.
5
Why does Aᵏ invertible imply A is invertible?
If Aᵏ has inverse B, then I = BAᵏ = (BAᵏ⁻¹)A, so BAᵏ⁻¹ is a left inverse of A. For a square matrix a one-sided inverse is automatically two-sided, so A is invertible.
6
Why does “A has n distinct eigenvalues” imply “A is diagonalizable”?
Eigenvectors for distinct eigenvalues are linearly independent. With n distinct eigenvalues you get n independent eigenvectors, which form a basis, so A is diagonalizable.
7
Why is a diagonalizable matrix always similar to a diagonal matrix?
If A has a basis of eigenvectors, put them as columns of P and eigenvalues on the diagonal of D. Then A P = P D, so A = P D P⁻¹ and A is similar to D.
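A minimal numerical illustration (arbitrary 2×2 example with distinct eigenvalues): NumPy's eig supplies the eigenvector matrix P and the eigenvalues for D, and A = P D P⁻¹ holds.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                        # distinct eigenvalues 2 and 3

evals, P = np.linalg.eig(A)                       # columns of P are eigenvectors
D = np.diag(evals)
print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A is similar to the diagonal matrix D
```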
8
Why does a matrix with orthonormal columns preserve lengths, that is, ‖A x‖ = ‖x‖?
Orthonormal columns mean AᵀA = I. Then ‖A x‖² = xᵀAᵀA x = xᵀx = ‖x‖², so lengths are preserved.
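A quick check with a matrix Q having orthonormal columns (taken here from a QR factorization of random data, an arbitrary choice): QᵀQ = I and lengths are preserved.

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))   # Q: 5x3 with orthonormal columns
x = rng.standard_normal(3)

print(np.allclose(Q.T @ Q, np.eye(3)))                        # Q^T Q = I
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))   # ||Qx|| = ||x||
```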
9
Why does (Col A)⊥ = Null(Aᵀ)?
A vector y is orthogonal to every column aⱼ iff aⱼ·y = 0 for all j. In matrix form that is Aᵀy = 0, so y is in Null(Aᵀ).
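The same fact checked numerically on an arbitrary 3×2 example: a basis vector of Null(Aᵀ) is orthogonal to every column of A.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])                 # arbitrary 3x2 example, rank 2

_, _, Vt = np.linalg.svd(A.T)              # last right-singular vector spans Null(A^T) here
y = Vt[-1]
print(np.allclose(A.T @ y, 0))                                   # y is in Null(A^T)
print(np.allclose(A[:, 0] @ y, 0), np.allclose(A[:, 1] @ y, 0))  # y is orthogonal to each column
```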
10
Why does dim(Col A) + dim(Null(Aᵀ)) = m for an m×n matrix?
Col A is a subspace of Rᵐ and its orthogonal complement is Null(Aᵀ). For any subspace W of Rᵐ, dim(W) + dim(W⊥) = m.
11
Why is a quadratic form Q(x) = xᵀA x positive definite iff all eigenvalues of symmetric A are positive?
For symmetric A we can write A = P D Pᵀ with P orthogonal and D the diagonal matrix of eigenvalues. Setting y = Pᵀx gives Q(x) = yᵀD y = Σλᵢ yᵢ², and y is nonzero exactly when x is, so Q(x) > 0 for all nonzero x iff every λᵢ > 0.
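A small numerical check on an arbitrary symmetric 2×2 example: all eigenvalues are positive, and the quadratic form is positive at a sample nonzero x.

```python
import numpy as np

A = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])               # symmetric example matrix

print(np.linalg.eigvalsh(A))                # [1. 3.] -- all eigenvalues positive
x = np.array([0.3, -0.7])
print(x @ A @ x > 0)                        # quadratic form is positive at this nonzero x
```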
12
Why does a linear map x ↦ A x scale area or volume by det(A) in absolute value?
The image of the unit square or unit cube under A is a parallelogram or parallelepiped whose area or volume is |det(A)|. Any region can be approximated by small cubes, each scaled by the same factor, so its area or volume is multiplied by |det(A)|.
13
Why does a regular stochastic matrix always have a unique steady-state probability vector with positive entries?
Regular means some power Pᵏ has all entries strictly positive. Markov chain theory (the Perron–Frobenius theorem) says such a matrix has a unique probability eigenvector for eigenvalue 1, with all entries positive, and every initial distribution converges to it.
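A concrete steady-state computation; the 2×2 transition matrix below is an arbitrary example whose entries are already all positive, so it is regular. The eigenvector for eigenvalue 1, rescaled to sum to 1, is the steady state.

```python
import numpy as np

P = np.array([[0.9, 0.2],
              [0.1, 0.8]])                  # column-stochastic, all entries positive

evals, vecs = np.linalg.eig(P)
q = vecs[:, np.argmin(np.abs(evals - 1.0))].real   # eigenvector for eigenvalue 1
q = q / q.sum()                                    # normalize to a probability vector
print(q)                                           # approximately [2/3, 1/3]
print(np.allclose(P @ q, q))                       # P q = q
```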
14
Why can a non-regular stochastic matrix still have a unique steady state?
Regularity is sufficient but not necessary: if Null(P − I) is one-dimensional, the eigenspace for eigenvalue 1 is spanned by a single vector, and normalizing it so its entries sum to 1 gives the unique steady-state probability vector.
15
Why must similar matrices have the same characteristic polynomial?
If B = P⁻¹AP then B − λI = P⁻¹(A − λI)P. Determinants multiply, so det(B − λI) = det(A − λI), giving the same characteristic polynomial.
16
State the rank–nullity formula for an m×n matrix A.
rank(A) + dim Null(A) = n.
17
How is rank(A) related to pivots?
rank(A) = number of pivot columns = number of pivot rows.
18
What is the dimension relation between Col A and Null(Aᵀ) in Rᵐ?
dim(Col A) + dim(Null(Aᵀ)) = m.
19
What is the relationship between Col A and Null(Aᵀ) as sets?
(Col A)⊥ = Null(Aᵀ).
20
What is a probability vector?
A vector whose entries are all ≥ 0 and sum to 1.
21
What is a (column) stochastic matrix?
A square matrix each of whose columns is a probability vector.
22
What is a regular stochastic matrix?
A stochastic matrix P such that for some k, every entry of Pᵏ is strictly positive.
23
What equation does a steady-state probability vector q for a Markov chain with transition matrix P satisfy?
It satisfies P q = q, or equivalently (P − I) q = 0, with entries summing to 1.
24
How do you compute the determinant of a triangular matrix?
Multiply the diagonal entries.
25
How do the three elementary row operations affect det(A)?
Swap two rows: det changes sign. Multiply a row by c: det is multiplied by c. Add a multiple of one row to another: det is unchanged.
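The three effects can be checked numerically on an arbitrary 2×2 example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
d = np.linalg.det(A)                        # -2

B = A[[1, 0], :]                            # swap the two rows
print(np.isclose(np.linalg.det(B), -d))     # sign flips

C = A.copy(); C[0] *= 5.0                   # multiply a row by 5
print(np.isclose(np.linalg.det(C), 5 * d))  # det is multiplied by 5

E = A.copy(); E[1] += 2.0 * E[0]            # add a multiple of one row to another
print(np.isclose(np.linalg.det(E), d))      # det unchanged
```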
26
What is det(AB) in terms of det(A) and det(B)?
det(AB) = det(A) det(B).
27
How are determinant and eigenvalues related for an n×n matrix A?
det(A) is the product of the eigenvalues, and trace(A) is the sum of the eigenvalues (counting multiplicity).
28
What is the characteristic polynomial of A?
p_A(λ) = det(A − λI); its roots are the eigenvalues of A.
29
What is the diagonalization formula for a diagonalizable matrix A?
A = P D P⁻¹, where columns of P are eigenvectors and D is diagonal of eigenvalues.
30
What are the eigenvalues of a triangular matrix?
The diagonal entries.
31
What is an eigenvalue-based condition that guarantees diagonalizability?
If an n×n matrix has n distinct eigenvalues, then it is diagonalizable.
32
When is a matrix singular in terms of its determinant?
It is singular (not invertible) exactly when det(A) = 0.
33
What is det(−A) in terms of det(A) for an n×n matrix?
det(−A) = (−1)ⁿ det(A).
34
How does a matrix A affect area or volume in terms of det(A)?
It multiplies areas or volumes by the absolute value of det(A).
35
How are rank(Aᵀ), rank(A), and nullity of Aᵀ related?
rank(Aᵀ) = rank(A), and dim Null(Aᵀ) = m − rank(A) for an m×n matrix.
36
What property of an orthogonal matrix Q ensures norm preservation?
Q has orthonormal columns, so QᵀQ = I and hence ‖Q x‖ = ‖x‖ for all x.
37
What is the orthogonal complement W⊥ of a subspace W?
W⊥ is the set of all vectors orthogonal to every vector in W.
38
What is the formula for the orthogonal projection of b onto a line spanned by a unit vector u?
proj_u(b) = (b·u) u.
39
What is the projection matrix onto Col A when A has full column rank?
P = A (AᵀA)⁻¹ Aᵀ.
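A numerical check on a random full-column-rank example: P = A(AᵀA)⁻¹Aᵀ is idempotent, and the residual b − Pb is orthogonal to Col A.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 2))             # full column rank with probability 1
b = rng.standard_normal(5)

P = A @ np.linalg.inv(A.T @ A) @ A.T        # projection matrix onto Col A
p = P @ b
print(np.allclose(P @ P, P))                # idempotent: projecting twice changes nothing
print(np.allclose(A.T @ (b - p), 0))        # residual is orthogonal to Col A
```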
40
What are the normal equations for the least-squares solution of A x = b?
AᵀA x̂ = Aᵀ b.
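Solving the normal equations directly agrees with a library least-squares solver; the random A and b below are arbitrary test data.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))                     # full column rank with probability 1
b = rng.standard_normal(6)

x_normal = np.linalg.solve(A.T @ A, A.T @ b)        # solve A^T A x = A^T b
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)     # reference least-squares solution
print(np.allclose(x_normal, x_lstsq))               # True: same solution
```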
41
How is the least-squares solution related to projection?
The least-squares solution x̂ makes A x̂ the orthogonal projection of b onto Col A.
42
How do you compute a least-squares solution using a QR factorization A = Q R?
Solve R x̂ = Qᵀ b by back substitution.
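A sketch of the QR route on random test data; np.linalg.solve stands in for a dedicated back-substitution routine since R is small here.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

Q, R = np.linalg.qr(A)                       # reduced QR: Q is 6x3, R is 3x3 upper triangular
x_qr = np.linalg.solve(R, Q.T @ b)           # solve R x = Q^T b
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x_qr, x_ls))               # both give the least-squares solution
```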
43
How are singular values of A related to eigenvalues of AᵀA?
Singular values σᵢ are the square roots of the eigenvalues of AᵀA.
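A numerical confirmation on a random 4×3 example: the singular values of A equal the square roots of the eigenvalues of AᵀA.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3))

sing = np.linalg.svd(A, compute_uv=False)            # singular values, in decreasing order
eig = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]     # eigenvalues of A^T A, decreasing
print(np.allclose(sing, np.sqrt(eig)))               # sigma_i = sqrt(lambda_i)
```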
44
State the singular value decomposition (SVD) of a matrix A.
A = U Σ Vᵀ, with U and V orthogonal and Σ diagonal with nonnegative singular values.
45
How do singular values give the rank of A?
rank(A) equals the number of nonzero singular values.
46
What is the condition number of a full-rank square matrix A in terms of singular values?
κ(A) = σ_max / σ_min.
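A quick check that σ_max/σ_min matches NumPy's 2-norm condition number on an arbitrary example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
s = np.linalg.svd(A, compute_uv=False)
print(np.isclose(s[0] / s[-1], np.linalg.cond(A)))   # np.linalg.cond uses the 2-norm by default
```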
47
What does the Rayleigh quotient xᵀA x for ‖x‖ = 1 tell you for symmetric A?
It lies between the smallest and largest eigenvalues and equals an extreme value when x is the corresponding eigenvector.
48
What is a simple positive-definiteness test for a 2×2 symmetric matrix [[a, b], [b, d]]?
It is positive definite iff a > 0 and ad − b² > 0.
49
What is the quadratic form associated with a symmetric matrix A?
Q(x) = xᵀA x.
50
What is an LU factorization?
A = L U where L is lower triangular (usually with ones on the diagonal) and U is upper triangular.
51
What is a QR factorization?
A = Q R where Q has orthonormal columns and R is upper triangular.
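Both factorizations can be computed numerically; note that library LU routines (here SciPy, an added dependency) use partial pivoting, so they return A = P L U with a permutation matrix P rather than the plain A = L U of the card.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])                  # arbitrary example

P, L, U = lu(A)                             # LU with partial pivoting: A = P L U
print(np.allclose(A, P @ L @ U))

Q, R = np.linalg.qr(A)                      # QR: orthonormal-column Q, upper-triangular R
print(np.allclose(A, Q @ R))
print(np.allclose(Q.T @ Q, np.eye(2)))
```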