What does the determinant represent?
It measures how a linear transformation scales volume (area in 2D, volume in 3D)
Why does det(A) = 0 mean “squashing”?
Because volume becomes zero, meaning the transformation collapses space into a lower dimension.
Why does det(A) ≠ 0 imply invertibility?
Because no information is lost, so the transformation can be reversed
Why does scaling a row scale the determinant?
Because it scales the volume in that direction
Why does swapping rows change sign?
It reverses orientation of space
Why does det(A) = 0 imply no or infinite solutions?
Because columns are dependent, so the system cannot have a unique solution.
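The determinant cards above can be checked numerically; a minimal NumPy sketch, using hypothetical 2×2 example matrices:

```python
import numpy as np

# A scales area by |det(A)|: the unit square maps to a parallelogram.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(np.linalg.det(A))        # area is scaled by 6

# Swapping rows reverses orientation, flipping the sign.
swapped = A[[1, 0]]
print(np.linalg.det(swapped))  # -6

# Dependent rows collapse the plane onto a line: det = 0, not invertible.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])     # second row = 2 * first row
print(np.linalg.det(S))        # 0 (up to floating-point error)
```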
What does independence mean conceptually?
No vector can be written using the others → no redundancy
What are the two conditions for a basis?
Linearly independent + spans the space
Why does a basis give unique representation?
Independence prevents multiple combinations; spanning ensures existence
Why do all bases have the same size?
Because dimension is an intrinsic property of the space
What does [x]_B mean?
The coefficients needed to build x from the basis B
Why do coordinates depend on basis?
Because you’re measuring the vector relative to different directions
What does changing coordinates do?
It changes how you describe the same vector, not the vector itself
Why is solving for coordinates a linear system?
Because you’re expressing a linear combination of basis vectors
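Finding [x]_B really is a linear system: stack the basis vectors as columns and solve. A small sketch with a hypothetical basis of R^2:

```python
import numpy as np

# Basis B stored as columns: b1 = (1, 0), b2 = (1, 1) (hypothetical example).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
x = np.array([3.0, 2.0])

# [x]_B solves B @ c = x, i.e. c1*b1 + c2*b2 = x.
c = np.linalg.solve(B, x)
print(c)                       # coordinates of x relative to B

# Rebuilding x from the coordinates recovers the same vector.
print(B @ c)                   # same as x
```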
Why is Nul(A) in R^n?
Because inputs (x) live in R^n
Why is Col(A) in R^m?
Because outputs (Ax) live in R^m
What is dimension?
Number of independent directions in a space.
Why can’t you have more than n independent vectors in R^n?
Because space only has n degrees of freedom
Why do more vectors than the dimension → dependence?
Because you exceed the number of independent directions.
Why is rank = dim(ColA)?
Because pivot columns form a basis for the column space.
Why is nullity “degrees of freedom”?
It counts how many variables can vary freely.
Why does rank + nullity = n?
Each variable is either constrained (pivot) or free.
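Rank–nullity can be verified directly; a minimal sketch with a hypothetical 3×4 matrix whose third row is dependent:

```python
import numpy as np

# n = 4 input variables; row 3 = row 1 + row 2, so only 2 independent rows.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 1.0],
              [1.0, 2.0, 1.0, 2.0]])

rank = np.linalg.matrix_rank(A)     # dim(Col A): pivot columns
nullity = A.shape[1] - rank         # free variables: n - rank
print(rank, nullity)                # constrained vs. free directions
```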
What does a change-of-basis matrix do?
Converts coordinates from one basis to another
Why is it invertible?
Because both bases are independent and span the space
Why does multiplication convert coordinates?
Because it re-expresses the same vector in a new basis
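A change-of-basis matrix in action; a minimal sketch with two hypothetical bases of R^2 (with B the standard basis, the conversion matrix reduces to C itself):

```python
import numpy as np

# Two bases of R^2, stored as columns (hypothetical example bases).
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])          # standard basis
C = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# Matrix converting C-coordinates to B-coordinates.
P = np.linalg.inv(B) @ C
x_C = np.array([2.0, 1.0])          # a vector written in C-coordinates
x_B = P @ x_C                       # the same vector in B-coordinates
print(x_B)

# P is invertible because both bases are independent and span R^2,
# so the conversion can always be undone.
print(np.linalg.inv(P) @ x_B)       # recovers x_C
```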
What is an eigenvector?
A direction that only scales under transformation
Why are they “special”?
Because they reveal the natural directions of the transformation
Why must they scale, not rotate?
Because rotation would change direction, violating the definition
Why is eigenspace = Nul(A - lambda I)?
Because eigenvectors satisfy (A - lambda I)x = 0
Why solve det (A - lambda I) = 0?
To find when the matrix becomes non-invertible
Why does that give eigenvalues?
Because non-invertibility allows nonzero solutions to exist
What is algebraic multiplicity?
How many times an eigenvalue appears as a root
Why doesn’t algebraic multiplicity = number of independent eigenvectors?
Because eigenvectors depend on null space dimension.
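The gap between algebraic and geometric multiplicity shows up numerically; a sketch using a hypothetical shear-like matrix with a repeated eigenvalue:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])

# det(A - lambda*I) = (2 - lambda)^2: algebraic multiplicity of 2 is 2.
eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                      # [2., 2.]

# But the eigenspace Nul(A - 2I) is only 1-dimensional:
# geometric multiplicity = n - rank(A - 2I) < algebraic multiplicity.
geo_mult = 2 - np.linalg.matrix_rank(A - 2 * np.eye(2))
print(geo_mult)                     # 1
```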
What does diagonalization do?
Simplifies a matrix into independent scaling directions.
Why is diagonalization useful?
Powers of matrices become easy.
When is a matrix diagonalizable?
When it has n independent eigenvectors.
Why must eigenspaces sum to n?
To form a full basis.
Why can repeated eigenvalues fail?
Because they might not produce enough independent eigenvectors.
What does diagonalization mean geometrically?
The transformation scales along independent axes.
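Diagonalization and why it makes powers easy; a minimal sketch with a hypothetical matrix that has n = 2 independent eigenvectors:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])          # distinct eigenvalues, so diagonalizable

# A = P D P^{-1}, with eigenvectors in P's columns and eigenvalues on D.
eigvals, P = np.linalg.eig(A)
D = np.diag(eigvals)
print(P @ D @ np.linalg.inv(P))     # reconstructs A

# Powers become easy: A^k = P D^k P^{-1}, and D^k is just elementwise powers.
k = 5
Ak = P @ np.diag(eigvals**k) @ np.linalg.inv(P)
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))
```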
Why do complex eigenvalues appear?
When transformations involve rotation.
What do they represent?
Rotation + scaling
Why can real matrices have complex eigenvalues?
Because rotation cannot be captured by real scaling directions.
Why not diagonalizable over R?
Because no real eigenvectors exist.
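Complex eigenvalues from a real rotation matrix; a minimal sketch with a 90-degree rotation, which scales no real direction:

```python
import numpy as np

# Rotation of the plane by 90 degrees: every nonzero vector changes direction.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigvals, _ = np.linalg.eig(R)
print(eigvals)   # approximately [0+1j, 0-1j]: no real eigenvectors exist
```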
How are independence, pivots, and solutions connected?
pivots → constraints
no pivots → free variables
free variables → dependence → multiple solutions
Why is dimension the bridge?
It connects algebra (basis size) with geometry (degrees of freedom).
Why is diagonalization “best basis”?
Because it aligns with natural scaling directions.
What does the equation x_{k+1} = Ax_k represent?
A system evolving step-by-step, where each state is obtained by applying the same linear transformation
What does x_k represent?
The state of the system at time step k
Why is this called a “dynamical system”?
Because it describes how a system changes over time
Why do we write x_0 = c1v1 +…+cnvn?
Because eigenvectors form a basis, so any vector can be expressed in them.
Why is this decomposition important?
Because each component evolves independently under A
What happens after one step?
x_1 = Ax_0 = c1 lambda_1 v1 + … + cn lambda_n vn
Each component gets scaled by its eigenvalue
Why is the general formula for x_k so powerful?
It turns a complicated system into simple exponential growth/decay along directions
What determines long-term behavior?
The eigenvalue with largest magnitude (dominant eigenvalue)
Why does the largest eigenvalue dominate?
Because (lambda)^k grows or decays fastest as k → infinity
What does this mean geometrically?
The system eventually points in one direction regardless of starting point
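The dominant-eigenvalue behavior can be seen by iterating x_{k+1} = Ax_k; a sketch with a hypothetical 2×2 matrix (eigenvalues 1.0 and 0.7, so the lambda = 1 direction wins):

```python
import numpy as np

A = np.array([[0.9, 0.2],
              [0.1, 0.8]])          # hypothetical example; eigenvalues 1.0 and 0.7

# Iterate the dynamical system from an arbitrary start, tracking direction only.
x = np.array([1.0, 0.0])
for _ in range(200):
    x = A @ x
    x = x / np.linalg.norm(x)       # normalize: we care about direction, not size

# The direction converges to the dominant eigenvector (largest |lambda|).
eigvals, eigvecs = np.linalg.eig(A)
dominant = eigvecs[:, np.argmax(np.abs(eigvals))]
dominant = dominant / np.linalg.norm(dominant)
print(np.abs(x), np.abs(dominant))  # same direction, up to sign
```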
When is the origin an attractor?
When all eigenvalues satisfy |lambda|<1
Why do all eigenvalues have to satisfy |lambda| < 1?
Because all components shrink to 0
When is the origin a repeller?
When all eigenvalues satisfy |lambda|>1
Why does |lambda| > 1 for all eigenvalues make the origin a repeller?
Because all components grow without bound
When do we get a saddle point?
When one eigenvalue has |lambda| > 1 and another has |lambda| < 1
What happens at a saddle point geometrically?
Some directions are attracted to 0, others are repelled
Which direction attracts in a saddle point?
The eigenvector with smaller |lambda|
Which direction repels in a saddle point?
The eigenvector with larger |lambda|
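A saddle point in action; a minimal sketch with a hypothetical diagonal matrix, where |2| > 1 repels along e1 and |0.5| < 1 attracts along e2:

```python
import numpy as np

# Diagonal saddle: eigenvalues 2 (repelling) and 0.5 (attracting).
A = np.diag([2.0, 0.5])

x = np.array([1.0, 1.0])
for _ in range(10):
    x = A @ x                 # iterate x_{k+1} = A x_k

print(x)   # e1-component blows up; e2-component shrinks toward 0
```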