Q: What does it mean for a set of vectors to span ℝⁿ?
A: Their linear combinations can produce every vector in ℝⁿ.
Q: What is the minimum number of vectors required to span ℝⁿ?
A: n linearly independent vectors.
Q: Can 2 vectors span ℝ⁴?
A: ❌ No. The span of 2 vectors is at most 2-dimensional.
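To make the span cards concrete, here is a minimal NumPy sketch (not part of the deck; the vectors and the helper name spans_Rn are made up for illustration). It uses the rank test: a set of vectors spans ℝⁿ exactly when the matrix having those vectors as columns has rank n.

```python
import numpy as np

def spans_Rn(vectors, n):
    """True when the matrix with these vectors as columns has rank n."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == n

# Two vectors can never span R^4: their span is at most 2-dimensional.
v1 = np.array([1, 0, 2, -1])
v2 = np.array([0, 1, 1, 3])
print(spans_Rn([v1, v2], 4))                      # False

# Four independent vectors (here, the standard basis) do span R^4.
e = np.eye(4)
print(spans_Rn([e[:, i] for i in range(4)], 4))   # True
```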
Q: What is a basis for a vector space?
A: A linearly independent set that spans the space.
Q: What is the dimension of a vector space?
A: The number of vectors in any basis for that space.
Q: If a set spans ℝⁿ, what can we always find inside it?
A: A subset that is a basis for ℝⁿ (remove dependent vectors).
Q: What is true about any spanning set in a finite-dimensional space?
A: It contains a basis as a subset.
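The "remove dependent vectors" idea can be carried out mechanically with row reduction. A small SymPy sketch (assuming SymPy is available; the matrix is an invented example): the pivot columns of the RREF pick out a subset of the spanning set that is a basis.

```python
from sympy import Matrix

# Columns (1,0), (2,1), (3,1) span R^2, but the third is redundant.
A = Matrix([[1, 2, 3],
            [0, 1, 1]])
_, pivot_cols = A.rref()
basis = [A.col(j) for j in pivot_cols]
print(pivot_cols)        # (0, 1): columns 0 and 1 form a basis of R^2
print(basis)             # [Matrix([[1], [0]]), Matrix([[2], [1]])]
```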
Q: If a set has more than n vectors in ℝⁿ, can it be independent?
A: ❌ No — any set with more than n vectors in ℝⁿ is linearly dependent.
Q: If a set has fewer than n vectors in ℝⁿ, can it span?
A: ❌ No — you need at least n to span ℝⁿ.
Q: Can you always add more vectors to an independent set in ℝ⁷?
A: ✅ Yes, until you reach 7 total vectors.
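A quick numerical illustration of the "more than n vectors in ℝⁿ are dependent" card, using a randomly generated matrix (the example is mine, not from the deck): five columns in ℝ⁴ can have rank at most 4, so they must be dependent.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))        # 5 vectors in R^4, stored as columns
print(np.linalg.matrix_rank(A) < 5)    # True: rank is at most 4, so the 5 columns are dependent
```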
Q: What are the two conditions for a transformation T to be linear?
A: 1. Additivity: T(u+v) = T(u) + T(v); 2. Homogeneity: T(cu) = cT(u).
Q: What must hold for a linear transformation regarding the zero vector?
A: T(0) = 0.
Q: Is T(x, y) = (x+1, y+2) linear?
A: ❌ No, because T(0,0) ≠ (0,0).
Q: Is T(x, y) = (2x, 3y) linear?
A: ✅ Yes — it satisfies both linearity conditions.
Q: If S and T are linear, is T∘S linear?
A: ✅ Yes, the composition of linear maps is linear.
Q: If S and T are both nonlinear, is T∘S necessarily nonlinear?
A: ❌ Not necessarily; the composition of two nonlinear maps can happen to be linear (e.g., S(x) = x + 1 and T(x) = x − 1 give T∘S(x) = x).
Q: How can you test whether a transformation is linear?
A: Plug in vectors and check both linearity conditions directly.
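Here is a hedged NumPy sketch of that "plug in vectors" check (the helper looks_linear and the random-sampling approach are my own illustration, not the deck's method). A failed check disproves linearity; passing many random checks is only strong evidence, not a proof.

```python
import numpy as np

def looks_linear(T, dim, trials=100, seed=1):
    """Numerically spot-check additivity and homogeneity on random inputs.
    A failure disproves linearity; passing every trial is only evidence."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        u, v = rng.standard_normal(dim), rng.standard_normal(dim)
        c = rng.standard_normal()
        if not np.allclose(T(u + v), T(u) + T(v)) or not np.allclose(T(c * u), c * T(u)):
            return False
    return True

shift = lambda x: np.array([x[0] + 1, x[1] + 2])   # T(x, y) = (x+1, y+2)
scale = lambda x: np.array([2 * x[0], 3 * x[1]])   # T(x, y) = (2x, 3y)
print(looks_linear(shift, 2))   # False: the shift fails both conditions
print(looks_linear(scale, 2))   # True: consistent with the map being linear
```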
Q: What is the standard matrix of a linear transformation T: ℝⁿ→ℝᵐ?
A: The matrix whose columns are T(e₁), T(e₂), …, T(eₙ).
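A short sketch of building that standard matrix column by column (the helper name standard_matrix is hypothetical; the map reuses T(x, y) = (2x, 3y) from an earlier card):

```python
import numpy as np

def standard_matrix(T, n):
    """Build the standard matrix of T: R^n -> R^m column by column from T(e_i)."""
    return np.column_stack([T(np.eye(n)[:, i]) for i in range(n)])

T = lambda x: np.array([2 * x[0], 3 * x[1]])   # the linear map T(x, y) = (2x, 3y)
print(standard_matrix(T, 2))                   # [[2. 0.]
                                               #  [0. 3.]]
```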
Q: What does the image (or range) of T mean?
A: The set of all possible outputs T(x) — equivalent to the column space of A.
Q: What is the null space of a matrix A?
A: All x such that A x = 0.
Q: State the Rank–Nullity Theorem.
A: rank(A) + nullity(A) = number of columns of A.
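A numerical check of the Rank–Nullity Theorem, assuming SciPy is available (scipy.linalg.null_space returns an orthonormal basis of the null space); the 3×5 matrix is an invented example with one redundant row.

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1., 2., 3., 4., 5.],
              [0., 1., 1., 0., 1.],
              [1., 3., 4., 4., 6.]])            # 3x5; the third row is the sum of the first two
rank = np.linalg.matrix_rank(A)                 # 2
nullity = null_space(A).shape[1]                # 3 basis vectors of the null space
print(rank + nullity == A.shape[1])             # True: rank + nullity = number of columns = 5
```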
Q: For A (5×3), what is the largest possible rank?
A: 3 (limited by the smaller dimension).
Q: For a linear map ℝ⁵→ℝ³, what’s the smallest possible nullity?
A: 2, since rank ≤ 3 and rank + nullity = 5, so nullity ≥ 2.
Q: If null(A) = {0}, what does that mean?
A: The transformation is one-to-one (injective), and A has full column rank.
Q: What can you say about independence preservation if null(A) = {0}?
A: Independent vectors remain independent under A.
Q: Can a matrix with null(A) = {0} send an independent set to a dependent one?
A: ❌ No — that would contradict injectivity.
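A small sketch of independence preservation under an injective map (both matrices are invented for illustration): when A has full column rank, applying it to independent vectors keeps them independent.

```python
import numpy as np

A = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])                       # 3x2 with null(A) = {0} (full column rank)
V = np.array([[1., 2.],
              [0., 1.]])                       # two independent vectors in R^2, as columns
print(np.linalg.matrix_rank(V))                # 2: the inputs are independent
print(np.linalg.matrix_rank(A @ V))            # 2: their images under A are still independent
```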
Q: What does a pivot in a column represent?
A: That column is not a linear combination of the columns before it (it adds a new independent direction).
Q: What does “every column has a pivot” mean?
A: The transformation is one-to-one (null space = {0}).
Q: What does “every row has a pivot” mean?
A: The transformation is onto (its image covers all of ℝᵐ).
Q: If T:ℝ³→ℝ³ has image ℝ³, what’s true about A?
A: A is 3×3 with full rank 3, so it has 3 pivots: one in every row and every column.
Q: If the image of T is ℝ³, what can we say about its null space?
A: It must be {0} (since rank = 3 and 3+nullity = 3).
Q: How do rank and nullity reflect onto/one-to-one?
A: Onto ↔ rank = number of rows; one-to-one ↔ rank = number of columns.
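A sketch of reading onto/one-to-one off the pivot pattern with SymPy (the 2×3 matrix is an invented example): count the pivots returned by rref and compare against the number of rows and columns.

```python
from sympy import Matrix

A = Matrix([[1, 0, 2],
            [0, 1, 1]])            # a 2x3 matrix
_, pivots = A.rref()
m, n = A.shape
print(len(pivots) == m)            # True: a pivot in every row, so the map is onto R^2
print(len(pivots) == n)            # False: a pivot-free column, so the map is not one-to-one
```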
Q: If T: ℝ⁵→ℝ² and S: ℝ²→ℝ⁵, what is T∘S?
A: A map ℝ²→ℝ² (outputs of S go into T).
Q: What happens if you reverse them (S∘T)?
A: Then S∘T: ℝ⁵→ℝ⁵.
Q: Can S∘T: ℝ⁵→ℝ⁵ be onto?
A: ❌ No; rank(S∘T) ≤ rank(T) ≤ 2, so its image is at most 2-dimensional.
Q: What is the matrix dimension of T∘S if T: ℝ²→ℝ⁵ and S: ℝ⁵→ℝ²?
A: (5×5) matrix (rows = output dim of T, columns = input dim of S).
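A numerical illustration of the composition cards, using random matrices of the stated sizes (my own example): with T: ℝ⁵→ℝ² and S: ℝ²→ℝ⁵, the matrix of S∘T is 5×5 but its rank is at most 2.

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((2, 5))            # T: R^5 -> R^2
S = rng.standard_normal((5, 2))            # S: R^2 -> R^5
C = S @ T                                  # matrix of S∘T: R^5 -> R^5
print(C.shape)                             # (5, 5)
print(np.linalg.matrix_rank(C))            # 2: rank(S∘T) <= rank(T) <= 2, so S∘T is never onto R^5
```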
Q: What does it mean for matrices A and B to be similar?
A: There exists an invertible P such that B = P⁻¹ A P.
Q: Do similar matrices have the same determinant?
A: ✅ Yes.
Q: What other properties do similar matrices share?
A: Same rank, trace, eigenvalues, and determinant.
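A sketch that checks those shared properties on a randomly generated similar pair (the matrices and seed are arbitrary; a random P is invertible with probability 1, but a careful implementation would verify this):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))                 # assumed invertible (almost surely true)
B = np.linalg.inv(P) @ A @ P                    # B = P^{-1} A P is similar to A
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))           # True: same determinant
print(np.isclose(np.trace(A), np.trace(B)))                     # True: same trace
print(np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B))     # True: same rank
print(np.allclose(np.sort(np.linalg.eigvals(A)),
                  np.sort(np.linalg.eigvals(B))))               # True: same eigenvalues (up to rounding)
```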
Q: When is a matrix nonsingular?
A: When it’s invertible (det ≠ 0, full rank, null space = {0}).
Q: If A is invertible, what is the rank of A?
A: Rank = n (full rank).
Q: How is the length (norm) of vector v computed?
A: ‖v‖ = √(v·v).
Q: What is the cosine of the angle between u and v?
A: cos θ = (u·v)/(‖u‖‖v‖).
Q: When are two vectors orthogonal?
A: When their dot product = 0.
Q: If vectors are orthogonal, what’s true about their independence?
A: Orthogonal nonzero vectors are automatically linearly independent.
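A small NumPy sketch tying the dot-product cards together (the vectors are made up so that u·v = 0): compute the norm, the cosine of the angle, and confirm that two orthogonal nonzero vectors are independent via the rank.

```python
import numpy as np

u = np.array([1., 2., 2.])
v = np.array([2., 1., -2.])

print(np.linalg.norm(u))                               # 3.0, i.e. sqrt(u . u)
cos_theta = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(cos_theta)                                       # 0.0: the vectors are orthogonal
print(np.linalg.matrix_rank(np.column_stack([u, v])))  # 2: orthogonal nonzero vectors are independent
```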
Q: Every linear transformation preserves the origin.
A: ✅ True.
Q: Every transformation that preserves the origin is linear.
A: ❌ False — it might still violate additivity or scaling.
Q: If A is 3×5, then rank(A) + nullity(A) = 3.
A: ❌ False — it equals the number of columns, 5.
Q: The image of A equals ℝ³ iff rank(A)=3.
A: ✅ True, provided A has 3 rows: then rank 3 means the columns span all of ℝ³.
Q: Two matrices with the same determinant must be similar.
A: ❌ False — determinant alone doesn’t imply similarity.
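One possible counterexample for this card (my own choice of matrices): both have determinant 1, but their traces and eigenvalues differ, and similar matrices must share those, so they cannot be similar.

```python
import numpy as np

A = np.eye(2)                       # det = 1, trace = 2, eigenvalues {1, 1}
B = np.array([[2., 0.],
              [0., 0.5]])           # det = 1, trace = 2.5, eigenvalues {2, 0.5}
print(np.linalg.det(A), np.linalg.det(B))   # 1.0 and (approximately) 1.0
print(np.trace(A), np.trace(B))             # 2.0 vs 2.5; different traces, so A and B are not similar
```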
Q: If A x = 0 has only the trivial solution, columns of A are independent.
A: ✅ True.
Q: If a set of vectors is linearly dependent, at least one is a linear combination of the others.
A: ✅ True.
Q: What is the study strategy summary?
A:
- Step 1: Read flashcards until definitions feel instinctive.
- Step 2: For every “False” card, write your own counterexample matrix or transformation.
- Step 3: Re-derive the logic of theorems from memory.
- Step 4: Do 5–10 short problems mixing rank/nullity, independence, and linearity checks.
- Step 5: End with the True/False section again to verify mastery.