Flashcards covering topics from a linear algebra course, including linear equations, matrices, vector spaces and diagonalization.
Linear Equation
An equation of the form a1x1 + a2x2 + . . . + anxn = b, where a1, a2, . . . , an ∈ R are called the coefficients and b ∈ R is the constant term.
Homogeneous Linear Equation
A linear equation on the variables x1, x2, . . . , xn is called homogeneous if it is of the form a1x1 + a2x2 + . . . + anxn = 0.
System of Linear Equations
A set of linear equations on the variables x1, x2, . . . , xn.
Homogeneous System of Linear Equations
A system of linear equations is called homogeneous if it consists of only homogeneous linear equations.
Solution to a Linear Equation
Given a linear equation a1x1 + a2x2 + . . . + anxn = b on the variables x1, x2, . . . , xn, a solution is an n-tuple of real numbers (s1, s2, . . . , sn) such that a1s1+a2s2+. . .+ansn = b.
Solution to a System of Linear Equations
Given a system of linear equations on the variables x1, x2, . . . , xn, a solution to the system is an n-tuple of real numbers (s1, s2, . . . , sn) that is a solution to each equation of the system.
Number of Solutions to a Homogeneous Linear System
A homogeneous system of linear equations admits either exactly one solution or infinitely many of them.
Number of Solutions to a General Linear System
A system of linear equations admits either no solutions or exactly one solution or infinitely many of them.
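The trichotomy above can be illustrated numerically. The following minimal sketch (using NumPy, with a made-up coefficient matrix) shows that a rank-deficient homogeneous system has a non-trivial solution, and hence infinitely many:

```python
import numpy as np

# A homogeneous system A x = 0 always admits the trivial solution x = 0.
# If rank(A) < number of unknowns, the null space is non-trivial and the
# system has infinitely many solutions.
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])        # second equation is twice the first
rank = np.linalg.matrix_rank(A)

# A non-trivial solution via the SVD: right-singular vectors associated
# to zero singular values span the null space of A.
_, s, Vt = np.linalg.svd(A)
x = Vt[-1]                          # unit vector in the null space
```

Here rank(A) = 1 < 3 unknowns, so A x = 0 has a two-dimensional space of solutions.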
Row Echelon Matrix
A matrix is said to be row echelon if it satisfies the following conditions: i) Rows with only zeros (if any) are at the bottom of the matrix. ii) If we call the k-th pivot the first non-zero coefficient of the k-th row (from left to right), then every pivot must be strictly to the right of the pivot in the row above it.
Reduced Echelon Matrix
A matrix is said to be reduced echelon if it is row echelon and, moreover, satisfies two additional conditions: iii) All pivots are equal to 1. iv) In any column containing a pivot, all other coefficients must be equal to zero.
Basic and Free Variables
Consider a linear system represented by an echelon matrix (possibly reduced). The variables corresponding to pivot columns are called basic variables, whereas the variables corresponding to non-pivot columns are called free variables.
Equivalent Linear Systems
Consider two different systems (S1) and (S2) of linear equations on the variables x1, x2, . . . , xn. We say that (S1) and (S2) are equivalent, denoted by (S1) ⇐⇒ (S2), if they have the same set of solutions.
Elementary Row Operations on a Linear System
There are three elementary row operations that transform a linear system into an equivalent one: 1. Multiplication of a row by a scalar: Lk ← λLk where λ ∈ R∗ . 2. Row permutation: Lk ←→ Lj . 3. Addition of a multiple of a given row: Lk ← Lk + λLj where λ ∈ R.
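These three operations are exactly the moves of Gauss-Jordan elimination. A minimal sketch (an illustrative NumPy implementation, not from the course) that reduces a matrix to reduced echelon form using only the operations listed above:

```python
import numpy as np

def rref(A, tol=1e-12):
    """Reduce a matrix to reduced row echelon form by Gauss-Jordan elimination."""
    A = np.array(A, dtype=float)
    rows, cols = A.shape
    pivot_row = 0
    for col in range(cols):
        if pivot_row >= rows:
            break
        # Pick the row with the largest entry in this column (partial pivoting).
        k = pivot_row + np.argmax(np.abs(A[pivot_row:, col]))
        if abs(A[k, col]) < tol:
            continue                               # no pivot: free-variable column
        A[[pivot_row, k]] = A[[k, pivot_row]]      # row permutation: Lk <-> Lj
        A[pivot_row] /= A[pivot_row, col]          # scaling: Lk <- (1/pivot) Lk
        for r in range(rows):                      # addition: Lr <- Lr - lambda Lk
            if r != pivot_row:
                A[r] -= A[r, col] * A[pivot_row]
        pivot_row += 1
    return A

# Invertible example matrix, so its reduced echelon form is the identity.
R = rref([[1., 2., 3.],
          [2., 4., 7.],
          [1., 0., 1.]])
```

Since every step is one of the three elementary operations, the system represented by R is equivalent to the original one.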
Vector Space over R
A set E is called a real vector space if it is equipped with two operations, an internal addition and an external multiplication by scalars, that satisfy certain axioms.
Linear Combination
Let E be a real-vector space and let u1, u2 be two vectors of E. Then, a linear combination of u1 and u2 is a vector of the form: αu1 + βu2, where α, β ∈ R.
Vector Subspace
Let E be a vector space and F ⊂ E a non-empty subset of E. We say that F is a vector subspace if it is “stable under linear combinations”, i.e. for all u, v ∈ F and all α, β ∈ R, we have αu + βv ∈ F.
Linear Span of a Family of Vectors
Let E be a vector space and F = {u1, . . . , un} a finite family of vectors in E. The subset of all linear combinations of vectors of F is a vector subspace called the linear span of F (also: “the subspace generated by F”) and denoted by Span(F).
Sum of Vector Subspaces
Let E be a vector space and F1, F2 two vector subspaces of E. Then, the sum F1 + F2 is the vector subspace of E defined by F1 + F2 = {u ∈ E | ∃v1 ∈ F1, ∃v2 ∈ F2, u = v1 + v2}.
Direct Sum, Supplementary Subspaces
Let E be a vector space and F1, F2 two vector subspaces of E. We say that F1 and F2 are in direct sum (or that F1 and F2 are supplementary), denoted E = F1 ⊕ F2, if we have: 1. E = F1 + F2. 2. F1 ∩ F2 = {0}.
Spanning Set
A spanning set is a family of vectors from which one can build all other vectors in the vector space. More precisely, let E be a vector space and F be a family of vectors in E. Then F is a spanning set of E if we have Span(F) = E.
Linearly Independent Set
A linearly independent set is a family of vectors such that none of them can be written as a linear combination of the other vectors of the family. More precisely, let E be a vector space and F = {u1, . . . , un} be a finite family of vectors. Then F is a linearly independent set if we have λ1u1 + . . . + λnun = 0 =⇒ λ1 = . . . = λn = 0.
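In R^p, this condition can be tested by computing the rank of the matrix whose columns are the vectors: the family is linearly independent exactly when the rank equals the number of vectors. A small sketch with made-up example vectors:

```python
import numpy as np

# Hypothetical vectors for illustration; note u3 = u1 + u2.
u1 = np.array([1., 0., 2.])
u2 = np.array([0., 1., 1.])
u3 = np.array([1., 1., 3.])
F = np.column_stack([u1, u2, u3])

# F is linearly independent iff the only solution of
# lambda1*u1 + lambda2*u2 + lambda3*u3 = 0 is trivial,
# i.e. iff rank(F) equals the number of vectors.
independent = np.linalg.matrix_rank(F) == F.shape[1]
```

Here the rank is 2 < 3, so the family is linearly dependent.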
Basis
Let E be a vector space. A basis of E is a family of vectors that is both a spanning set and a linearly independent set.
Finite-Dimensional Vector Spaces
A vector space E is said to be finite-dimensional if it has a finite spanning set.
Dimension of a Vector Space
Let E be a finite-dimensional vector space. The dimension of E, denoted dim(E), is the number of vectors of any basis of E.
Matrix Associated to a Family of Vectors
Let E be an n-dimensional vector space, B = {b1, . . . , bn} a basis of E and F = {f1, . . . , fp} a finite family of vectors in E. Then the matrix associated to the family F in the basis B, denoted MatB(F), is the matrix of p columns and n rows where the coefficients of the j-th column are the coordinates in the basis B of the vector fj.
Linear Map
A linear map is a map between vector spaces that respects the two fundamental operations of linear algebra (internal addition and external multiplication). More precisely, let A and B be two vector spaces. A map f : A −→ B is said to be a linear map if: ∀u1, u2 ∈ A, ∀α, β ∈ R, f(αu1 + βu2) = αf(u1) + βf(u2).
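Any matrix A induces a linear map f(x) = Ax, and the defining identity can be spot-checked numerically (an illustration on random vectors, not a proof):

```python
import numpy as np

# f(x) = A @ x is always linear; check f(alpha*u1 + beta*u2) = alpha*f(u1) + beta*f(u2)
# on randomly chosen vectors and scalars.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # a map from R^2 to R^3
f = lambda x: A @ x

u1, u2 = rng.standard_normal(2), rng.standard_normal(2)
alpha, beta = 2.0, -0.5
lhs = f(alpha * u1 + beta * u2)
rhs = alpha * f(u1) + beta * f(u2)
```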
Endomorphism
An endomorphism is a linear map from a vector space A to itself.
Isomorphism
An isomorphism is a linear map that is also a bijection.
Automorphism
An automorphism is a linear map that is both an endomorphism and an isomorphism.
Image of a Linear Map
Given a linear map f : A −→ B, its image Im(f) is the subset of the codomain B defined by Im(f) = {b ∈ B | ∃a ∈ A, f(a) = b}.
Rank of a Linear Map
Given a linear map f : A −→ B, the rank of f, denoted rank(f), is the dimension of Im(f).
Kernel of a Linear Map
Given a linear map f : A −→ B, its kernel Ker(f) is the subset of the domain A defined by Ker(f) = {a ∈ A | f(a) = 0B}.
Matrix Associated to a Linear Map
Let A and B be two finite-dimensional vector spaces. Let f : A −→ B be a linear map. Let A = {a1, a2, . . . , an} be a basis of A and B = {b1, b2, . . . , bp} be a basis of B. Then, the matrix associated to the linear map f in the bases A and B, denoted MatB,A(f), is defined as the matrix associated to the family f(A) in the basis B.
The Rank-Nullity Theorem
Let f : A −→ B be a linear map between two finite-dimensional vector spaces. Then, we have rank(f) + dim(Ker(f)) = dim(A) .
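For the map f(x) = Mx, the rank is the rank of M and the kernel dimension is the number of columns minus the rank. A quick numerical check on a made-up example matrix:

```python
import numpy as np

# f : R^4 -> R^3, f(x) = M @ x. Rank-nullity says
# rank(f) + dim(Ker f) = dim of the domain = number of columns of M.
M = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 0.],
              [1., 3., 1., 1.]])   # third row = first row + second row
rank = np.linalg.matrix_rank(M)
nullity = M.shape[1] - rank
```

Here rank(f) = 2 and dim(Ker f) = 2, which indeed sum to dim(R^4) = 4.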
Matrix
A matrix M is simply a table of p rows and n columns where each element is a real number.
Column Vector, Square Matrix
A matrix is called a column vector if it has only one column and a square matrix if it has as many rows as columns.
Diagonal and Triangular Matrices
A square matrix M is called • diagonal if all its non-diagonal elements are 0. • upper triangular if all the elements below the diagonal are 0. • lower triangular if all the elements above the diagonal are 0.
Multiplication by a Scalar
Let M be a matrix with p rows and n columns and let λ be a real number. Then λM is the matrix found by multiplying all elements of M by λ: (λM)ij = λMij .
Addition of Matrices
Let M and N be two matrices of the same size. Then one can define their sum M + N by (M + N)ij = Mij + Nij .
Product of a Matrix and a Vector
Let M ∈ Mpn(R) be a matrix with p rows and n columns. Denote by C1, C2, . . . , Cn the n columns of M. Moreover, let X ∈ Rn be a column vector with n rows. Then, the product MX is the vector column with p rows defined by MX = x1C1 + x2C2 + . . . + xnCn.
Product of Two Matrices
Let M ∈ Mpn(R) be a matrix with n columns and N ∈ Mnk(R) a matrix with n rows. Denote by N1, N2, . . . , Nk the columns of N. Then, the product MN is the matrix with p rows and k columns whose columns are the vectors MN1, MN2, . . . , MNk.
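This column-by-column definition translates directly into code. The sketch below (NumPy, with made-up matrices) builds MN one column at a time and compares it with the built-in product:

```python
import numpy as np

def matmul_by_columns(M, N):
    """Product MN built column by column: the j-th column of MN is M @ (j-th column of N)."""
    return np.column_stack([M @ N[:, j] for j in range(N.shape[1])])

M = np.array([[1., 2.], [3., 4.], [5., 6.]])    # 3 rows, 2 columns
N = np.array([[1., 0., 2.], [0., 1., 1.]])      # 2 rows, 3 columns
P = matmul_by_columns(M, N)                     # 3 rows, 3 columns
```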
Identity Matrix
The square matrix of size n with 1 in all diagonal elements and 0 everywhere else is called the identity matrix and denoted In.
Invertible Matrix
Let M be a square matrix of size n. We say that M is invertible if there exists another square matrix N of size n such that MN = NM = In.
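A numerical illustration with a made-up invertible matrix, checking both defining identities MN = NM = In:

```python
import numpy as np

M = np.array([[2., 1.], [1., 1.]])   # det(M) = 1, so M is invertible
N = np.linalg.inv(M)

# N is the inverse of M exactly when M @ N and N @ M both equal I2.
```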
Transpose of a Matrix
Let M ∈ Mpn(R) be any matrix with p rows and n columns. Then, the transpose of M, denoted Mt ∈ Mnp(R), is the matrix with n rows and p columns whose columns are the rows of M. More precisely, we have (Mt)ij = Mji.
Symmetric and Skew-Symmetric Matrices
A square matrix M ∈ Mnn(R) is said to be symmetric if Mt = M and skew-symmetric if Mt = −M.
Trace of a Square Matrix
Let M ∈ Mnn(R) be a square matrix. Then, the trace of M, denoted Tr(M), is defined as the sum of all the diagonal elements of M: Tr(M) = M11 + M22 + . . . + Mnn.
Determinant of a 2x2 Matrix
For a 2 × 2 matrix M with entries a, b on the first row and c, d on the second row, its determinant is the real number det(M) = ad − bc.
General Definition of the Determinant
Consider a general square matrix M of size n. Denote by Mi,j the square matrix of size n − 1 obtained by deleting row i and column j from the matrix M. Then, expanding along the first row, det(M) = m11 det(M1,1) − m12 det(M1,2) + . . . + (−1)^(n+1) m1n det(M1,n), i.e. det(M) is the sum over k = 1, . . . , n of the terms (−1)^(k+1) m1k det(M1,k).
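The recursive cofactor expansion can be coded directly. A minimal sketch (exponential-time, for illustration only; NumPy's `np.linalg.det` is the practical tool), checked against a made-up 3 × 3 example:

```python
import numpy as np

def det_cofactor(M):
    """Determinant by cofactor expansion along the first row."""
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    if n == 1:
        return M[0, 0]
    total = 0.0
    for k in range(n):
        # M1,k+1: delete row 1 and column k+1 (0-indexed: row 0, column k).
        minor = np.delete(np.delete(M, 0, axis=0), k, axis=1)
        total += (-1) ** k * M[0, k] * det_cofactor(minor)
    return total

A = [[2., 1., 0.],
     [1., 3., 1.],
     [0., 1., 2.]]
# det(A) = 2*(3*2 - 1*1) - 1*(1*2 - 1*0) + 0 = 8
```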
Determinant of a Triangular Matrix
Let T ∈ Mnn(R) be a triangular square matrix. Then the determinant of T is the product of its diagonal terms: det(T) = T11 × T22 × . . . × Tnn.
Eigenvectors, Eigenvalues, Eigenspaces
Let E be a vector space and f : E −→ E be an endomorphism on E. Whenever we find a non-zero vector v ∈ E and a value λ ∈ R such that f(v) = λv, we say that v is an eigenvector of f and λ is an eigenvalue of f. The set of all eigenvalues of f is called the spectrum of f and is denoted Spec(f). Given an eigenvalue λ ∈ Spec(f), the set Eλ of all vectors satisfying f(v) = λv is called the eigenspace of f associated to the eigenvalue λ.
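For an endomorphism of R^n given by a matrix, eigenvalues and eigenvectors can be computed with `np.linalg.eig`. A sketch on a made-up symmetric matrix whose eigenvalues are 1 and 3:

```python
import numpy as np

M = np.array([[2., 1.], [1., 2.]])        # Spec = {1, 3}
eigvals, eigvecs = np.linalg.eig(M)       # eigvecs stores eigenvectors as columns

# Each column v of eigvecs satisfies the defining relation M @ v = lambda * v.
checks = [np.allclose(M @ eigvecs[:, i], eigvals[i] * eigvecs[:, i])
          for i in range(len(eigvals))]
```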
Geometric Multiplicity of an Eigenvalue
Let E be a vector space, f an endomorphism on E and λ ∈ R an eigenvalue of f. Then, the dimension of Eλ is called the geometric multiplicity of λ.
Characteristic Polynomial of an Endomorphism or Matrix
Let f be an endomorphism on a finite-dimensional vector space E. The characteristic polynomial of f is the quantity Pf(x) = det(f − x idE), where x ∈ R.
Algebraic Multiplicity of an Eigenvalue
Let f be an endomorphism on a finite-dimensional vector space E and λ ∈ spec(f) an eigenvalue of f. Then, the multiplicity of λ as a root of the characteristic polynomial Pf is called the algebraic multiplicity of λ.
Diagonalizable Endomorphism
Let E be a finite-dimensional vector space and f be an endomorphism on E. We say that f is diagonalizable if there exists a basis B such that MatB(f) is a diagonal matrix.
Diagonalizable Matrix
Let M be a square matrix of size n. We say that M is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that M = PDP⁻¹.
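When a matrix has n distinct eigenvalues, its eigenvectors form a basis, so the eigenvector matrix P is invertible and the factorization can be verified directly. A sketch on a hypothetical matrix with eigenvalues 2 and 5:

```python
import numpy as np

M = np.array([[4., 1.], [2., 3.]])   # eigenvalues 2 and 5 (distinct)
eigvals, P = np.linalg.eig(M)        # columns of P are eigenvectors
D = np.diag(eigvals)

# Distinct eigenvalues => the eigenvectors form a basis, P is invertible,
# and M factors as P D P^{-1}.
reconstructed = P @ D @ np.linalg.inv(P)
```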
Transition Matrix
Let A be a finite dimensional vector space, B and C two bases of A. The transition matrix representing the basis C in the basis B, denoted PBC, is the matrix whose columns are the coordinates of the vectors of C in the basis B.