Linear equation
An equation of the form a_1x_1 + a_2x_2 + ⋯ + a_nx_n = b, where the a_i and b are constants and the x_i are variables.
System of linear equations representation
Represented using a coefficient matrix and a constant vector.
Augmented matrix
Matrix obtained by appending the constant terms of a system of equations to the coefficient matrix.
Gaussian elimination
Method to solve linear equations by transforming the augmented matrix into row echelon form.
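A minimal sketch of the method in Python (NumPy assumed); the function name, the partial-pivoting choice, and the example system are illustrative only, and the code assumes a square, invertible coefficient matrix:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve Ax = b by forward elimination with partial pivoting,
    then back substitution. Assumes A is square and invertible."""
    M = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])  # augmented matrix
    n = len(b)
    for col in range(n):
        # Partial pivoting: swap in the row with the largest pivot candidate.
        pivot = np.argmax(np.abs(M[col:, col])) + col
        M[[col, pivot]] = M[[pivot, col]]
        # Eliminate entries below the pivot.
        for row in range(col + 1, n):
            M[row] -= (M[row, col] / M[col, col]) * M[col]
    # Back substitution on the row echelon form.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, -1] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(gaussian_elimination(A, b))  # matches np.linalg.solve(A, b)
```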
Row echelon form
Matrix where all nonzero rows are above any rows of zeros, and the leading entry of each nonzero row lies strictly to the right of the leading entry of the row above.
Reduced row echelon form
Row echelon form where each leading entry is 1 and is the only nonzero entry in its column.
Unique solution in linear equations
Exists if, after row reducing the augmented matrix, the system is consistent and has no free variables.
Pivot position
Position in a matrix corresponding to a leading entry in its row echelon form (a leading 1 in reduced row echelon form).
Free variable
Variable that is not a leading variable in the row echelon form of the matrix and can take any value.
Consistent system significance
System with at least one solution, while an inconsistent system has none.
Matrix addition
Adding corresponding elements of two matrices of the same dimensions.
Scalar multiplication
Multiplying each entry of a matrix by a scalar.
Matrix multiplication
Taking the dot product of each row of the first matrix with each column of the second to produce a new matrix; defined only when the number of columns of the first matrix equals the number of rows of the second.
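A small illustration (NumPy assumed) of the entry-by-entry dot-product definition, checked against NumPy's built-in product; the matrices are arbitrary examples:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])         # 2x2
B = np.array([[5, 6, 7], [8, 9, 10]])  # 2x3

# Entry (i, j) of AB is the dot product of row i of A with column j of B.
C = np.array([[A[i] @ B[:, j] for j in range(B.shape[1])]
              for i in range(A.shape[0])])

assert (C == A @ B).all()  # agrees with NumPy's built-in product
print(C)
```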
Identity matrix
Square matrix with ones on the diagonal and zeros elsewhere, acting as the multiplicative identity.
Transpose of a matrix
Obtained by swapping rows with columns.
Zero matrix
Matrix where all elements are zero.
Diagonal matrix
Square matrix where all off-diagonal elements are zero.
Matrix equality
When matrices have the same dimensions and corresponding entries are equal.
Matrix inverse
Matrix A has an inverse A^-1 such that AA^-1 = A^-1A = I, where I is the identity matrix.
Solution set of linear system
Set of all possible solutions satisfying the system of equations.
Homogeneous system
System where all constant terms are zero, i.e., Ax=0.
Particular solution
Specific solution to a non-homogeneous linear system.
General solution
Set of all solutions, often expressed as a particular solution plus the solution to the associated homogeneous system.
Solution set in terms of free variables
Expressed by solving for basic variables in terms of free variables.
Geometric interpretation of solution set
Intersection of lines in a plane for two linear equations in two variables, or planes in three-dimensional space for three linear equations in three variables.
Infinite solutions in a system
Occur when a consistent system has free variables; the solution set then forms a line, plane, or higher-dimensional flat in the solution space.
Pivot columns in solution set
Correspond to basic variables determined by the system, while non-pivot columns correspond to free variables.
Existence of solutions from row echelon form
A solution exists unless the row echelon form contains a row of zeros with a nonzero entry in the constant column; such a row signals an inconsistent system.
Associative property of matrix addition
(A+B)+C = A+(B+C) for matrices A, B, and C of the same dimensions.
Distributive property of matrix operations
Matrix multiplication distributes over addition: A(B+C) = AB + AC and (A+B)C = AC + BC.
Commutative matrix multiplication condition
Matrices A and B commute if AB = BA.
Inverse of a 2x2 matrix
For a 2x2 matrix A = (a b; c d), the inverse is given by A^-1 = (1/(ad−bc)) * (d −b; −c a), provided ad−bc ≠ 0.
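The adjugate formula can be verified numerically; inverse_2x2 is a hypothetical helper and the example matrix is arbitrary (NumPy assumed):

```python
import numpy as np

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the adjugate formula; raises if singular."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (ad - bc = 0)")
    return (1 / det) * np.array([[d, -b], [-c, a]])

A = np.array([[4.0, 7.0], [2.0, 6.0]])
print(inverse_2x2(A))      # matches np.linalg.inv(A)
print(A @ inverse_2x2(A))  # approximately the identity
```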
Identity matrix property in multiplication
Acts as the multiplicative identity: AI = IA = A for any matrix A of compatible dimensions.
Similar matrices
Matrices A and B are similar if there exists an invertible matrix P such that B = P^-1AP.
Determining invertibility of a matrix
A matrix is invertible if and only if its determinant is non-zero.
Transpose of a matrix product
(AB)^T = B^T A^T.
Determinant role in matrix inversion
A nonzero determinant is a necessary and sufficient condition for a matrix to have an inverse.
Determinant
The determinant of a matrix helps determine if the matrix is invertible; a non-zero determinant indicates that the matrix has an inverse.
Elementary Matrix
An elementary matrix is obtained by performing a single row operation on the identity matrix.
Swapping Two Rows
Swapping two rows of a matrix multiplies the determinant by -1.
Multiplying a Row by a Scalar
Multiplying a row by a scalar k multiplies the determinant by k.
Adding a Multiple of One Row to Another Row
Adding a multiple of one row to another row does not change the determinant.
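The three row operations above and their effect on the determinant can be checked numerically; the example matrix and the scalar k = 4 are arbitrary (NumPy assumed):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])
d = np.linalg.det(A)  # -1.0

swap = A[[1, 0]]                        # swap the two rows
scale = A.copy(); scale[0] *= 4         # multiply row 0 by k = 4
shear = A.copy(); shear[1] += 2 * A[0]  # add 2*(row 0) to row 1

print(np.linalg.det(swap))   # -d  : sign flips
print(np.linalg.det(scale))  # 4*d : scales by k
print(np.linalg.det(shear))  # d   : unchanged
```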
Inverse of an Elementary Matrix
The inverse of an elementary matrix is another elementary matrix corresponding to the inverse of the row operation.
Vector Space
A vector space is a set of vectors along with two operations (vector addition and scalar multiplication) that satisfy certain axioms.
Subspace
A subspace is a subset of a vector space that is itself a vector space under the same operations.
Span of Vectors
The span of a set of vectors is the set of all possible linear combinations of those vectors.
Basis of a Vector Space
A basis is a set of linearly independent vectors that spans the vector space.
Null Space of a Matrix
The null space (or kernel) of a matrix A is the set of all vectors x such that Ax=0.
Row Space
The row space is the span of the rows of the matrix.
Linear Combination
A linear combination is a sum of scalar multiples of vectors.
Rank of a Matrix
The rank is the dimension of the column space (or row space) of the matrix.
Rank-Nullity Theorem
The rank-nullity theorem states that for an m×n matrix A, rank(A) + nullity(A) = n.
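A quick numerical check of the theorem (NumPy assumed); the nullity is counted from the singular values, and the 1e-10 tolerance is an arbitrary choice:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # rank 1, so nullity should be 3 - 1 = 2

rank = np.linalg.matrix_rank(A)

# Nullity = n minus the number of (near-)nonzero singular values.
_, s, _ = np.linalg.svd(A)
n = A.shape[1]
nullity = n - np.sum(s > 1e-10)

print(rank, nullity, rank + nullity == n)  # 1 2 True
```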
Eigenvalue
An eigenvalue λ is a scalar such that Av=λv for some non-zero vector v, called an eigenvector.
Characteristic Polynomial
The characteristic polynomial is det(A-λI), where λ is a scalar and I is the identity matrix.
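A short illustration (NumPy assumed): np.poly applied to a square matrix returns the coefficients of its characteristic polynomial, and np.linalg.eig returns the roots as eigenvalues; the example matrix is arbitrary:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])

# Coefficients of the characteristic polynomial (monic, highest degree first).
print(np.poly(A))  # [ 1. -4.  3.]  ->  lambda^2 - 4*lambda + 3

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)     # roots of the characteristic polynomial: 3 and 1

# Each column of eigvecs satisfies A v = lambda v.
v = eigvecs[:, 0]
print(np.allclose(A @ v, eigvals[0] * v))  # True
```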
Diagonalizable Matrix
A matrix is diagonalizable if it can be written as PDP^-1, where D is a diagonal matrix and P is an invertible matrix.
Diagonalization Process
Diagonalization involves finding a matrix P of eigenvectors and a diagonal matrix D of eigenvalues such that A = PDP^-1.
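A sketch of the process (NumPy assumed); the example matrix has distinct eigenvalues, so P is guaranteed invertible:

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])

eigvals, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(eigvals)           # eigenvalues on the diagonal

# With n independent eigenvectors, A = P D P^-1.
print(np.allclose(A, P @ D @ np.linalg.inv(P)))  # True
```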
Eigenspace
The eigenspace corresponding to an eigenvalue λ is the set of all eigenvectors associated with λ, along with the zero vector.
Orthogonal vectors
Two vectors are orthogonal if their dot product is zero.
Orthogonal set of vectors
A set of vectors where each pair of distinct vectors is orthogonal.
Orthonormal set of vectors
An orthogonal set where each vector is of unit length (norm 1).
Gram-Schmidt process
An algorithm to orthogonalize a set of vectors in an inner product space.
Projection of a vector onto another vector
The projection of vector v onto u is given by proju v = (v⋅u / u⋅u)u.
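Both the projection formula and the Gram-Schmidt process above can be sketched in a few lines (NumPy assumed); proj and gram_schmidt are hypothetical helpers, and the input vectors are assumed linearly independent:

```python
import numpy as np

def proj(v, u):
    """Projection of v onto u: (v.u / u.u) u."""
    return (v @ u) / (u @ u) * u

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for v in vectors:
        # Subtract the projection onto each vector already in the basis.
        w = v - sum((proj(v, u) for u in basis), np.zeros_like(v))
        basis.append(w / np.linalg.norm(w))
    return basis

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
q1, q2 = gram_schmidt(vs)
print(q1 @ q2)  # ~0: orthogonal
print(np.linalg.norm(q1), np.linalg.norm(q2))  # 1.0 1.0
```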
Orthogonal complement of a subspace
The set of all vectors orthogonal to every vector in a subspace.
Significance of orthogonality in least squares problems
In least squares problems, orthogonal projection minimizes the distance between the vector and the subspace.
Least squares solution to an overdetermined system
The solution that minimizes the norm of the residual vector b−Ax, found using the normal equations A^T A x = A^T b.
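A sketch of the normal-equations approach (NumPy assumed), checked against NumPy's lstsq; the overdetermined system is an arbitrary example:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns.
A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal equations: A^T A x = A^T b.
x = np.linalg.solve(A.T @ A, A.T @ b)

# Agrees with NumPy's least squares routine.
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_ref))  # True

# The residual b - Ax is orthogonal to the columns of A.
print(np.allclose(A.T @ (b - A @ x), 0))  # True
```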
Relation between orthogonal matrices and orthogonality
An orthogonal matrix Q satisfies Q^T Q = QQ^T = I, indicating its columns form an orthonormal set.
Linear transformation
A function between vector spaces that preserves vector addition and scalar multiplication.
Kernel (null space) of a linear transformation
The set of all vectors that map to the zero vector under the linear transformation.
Image (range) of a linear transformation
The set of all possible outputs of the linear transformation.
Rank of a linear transformation
The dimension of the image (range) of the transformation.
Nullity of a linear transformation
The dimension of the kernel (null space) of the transformation.
Rank-nullity theorem for linear transformations
States that for a linear transformation T:V→W, rank(T) + nullity(T) = dim(V).
Determining if a transformation is invertible
A transformation is invertible if and only if it is both one-to-one (injective) and onto (surjective).
Effect of composition of linear transformations
The composition of two linear transformations is itself a linear transformation, with a matrix representation as the product of the individual transformation matrices.
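A small illustration (NumPy assumed): applying two transformations in sequence matches multiplying their matrices; the rotation and scaling matrices are arbitrary examples:

```python
import numpy as np

M1 = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotate 90 degrees
M2 = np.array([[2.0, 0.0], [0.0, 2.0]])   # scale by 2

x = np.array([1.0, 1.0])

# Applying T1 then T2 equals applying the single matrix M2 @ M1.
print(M2 @ (M1 @ x))  # [-2.  2.]
print((M2 @ M1) @ x)  # same result
```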
Matrix of a linear transformation with respect to given bases
The matrix representation of a linear transformation depends on the chosen bases for the domain and codomain, showing how the transformation acts on basis vectors.
Diagonalization of a matrix
Expressing a matrix A as PDP^−1, where D is a diagonal matrix and P is an invertible matrix.
Determining if a matrix is diagonalizable
A matrix is diagonalizable if it has n linearly independent eigenvectors, where n is the size of the matrix.
Diagonal matrix in diagonalization
In the factorization A = PDP^−1, the diagonal matrix D contains the eigenvalues of A on its diagonal.
Importance of eigenvectors in diagonalization
Eigenvectors form the columns of the matrix P used in diagonalization.
Finding eigenvalues of a matrix
Eigenvalues are found by solving the characteristic polynomial det(A−λI) = 0.
Relationship between eigenvalues and the trace of a matrix
The trace of a matrix equals the sum of its eigenvalues.
Relationship between eigenvalues and the determinant of a matrix
The determinant of a matrix equals the product of its eigenvalues.
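Both relationships (trace and determinant versus eigenvalues) can be confirmed numerically (NumPy assumed); the example matrix is arbitrary:

```python
import numpy as np

A = np.array([[3.0, 1.0], [0.0, 2.0]])
eigvals = np.linalg.eigvals(A)

print(np.isclose(np.trace(A), eigvals.sum()))        # True: trace = sum
print(np.isclose(np.linalg.det(A), eigvals.prod()))  # True: det = product
```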
Applying the spectral theorem to symmetric matrices
States that a symmetric matrix can be diagonalized by an orthogonal matrix.
Significance of eigenvalue decomposition
Aids in simplifying matrix functions and solving differential equations.
Computing powers of a diagonalizable matrix
Powers of a diagonalizable matrix A can be computed as A^k = PD^kP^−1, where D^k is the diagonal matrix with the eigenvalues raised to the power k.
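A quick check of the formula against NumPy's matrix_power; the matrix and exponent are arbitrary (NumPy assumed):

```python
import numpy as np

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)
k = 5

# A^k = P D^k P^-1, with the eigenvalues raised to the k-th power.
Ak = P @ np.diag(eigvals**k) @ np.linalg.inv(P)
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))  # True
```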
Orthogonal diagonalization
Expressing a symmetric matrix A as PDP^T, where D is a diagonal matrix and P is an orthogonal matrix.
Spectral theorem for symmetric matrices
States that every symmetric matrix can be diagonalized by an orthogonal matrix.
Finding an orthogonal matrix for orthogonal diagonalization
Formed from the normalized eigenvectors of the symmetric matrix.
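A sketch of orthogonal diagonalization (NumPy assumed): np.linalg.eigh, which is specialized for symmetric matrices, already returns orthonormal eigenvectors; the example matrix is arbitrary:

```python
import numpy as np

S = np.array([[2.0, 1.0], [1.0, 2.0]])  # symmetric

# Columns of Q are orthonormal eigenvectors; w holds the eigenvalues.
w, Q = np.linalg.eigh(S)

print(np.allclose(S, Q @ np.diag(w) @ Q.T))  # A = Q D Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))       # Q is orthogonal
```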
Relationship between eigenvalues and orthogonal matrices in orthogonal diagonalization
Eigenvalues are on the diagonal of the diagonal matrix, and the columns of the orthogonal matrix are the normalized eigenvectors.
Properties of orthogonal matrices
Orthogonal matrices satisfy Q^T Q = QQ^T = I, indicating their rows and columns are orthonormal.
Verifying if a matrix is symmetric
A matrix is symmetric if it is equal to its transpose, i.e., A = A^T.
Significance of orthogonal diagonalization in practical applications
Simplifies matrix computations, including solving differential equations and principal component analysis in statistics.
Role of the Gram-Schmidt process in orthogonal diagonalization
Used to produce an orthonormal basis, aiding in constructing the orthogonal matrix P.
Effect of orthogonality of eigenvectors on matrix computations
Simplifies computations and ensures numerical stability.
Computing the matrix exponential for a diagonalizable matrix
For a diagonalizable matrix A = PDP^−1, the matrix exponential e^A is computed as e^A = Pe^D P^−1, where e^D is the exponential of the diagonal matrix D.
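A sketch of the formula (NumPy assumed); the diagonalizable example matrix is arbitrary, and the result is sanity-checked against a truncated Taylor series rather than a library routine:

```python
import numpy as np
from math import factorial

A = np.array([[4.0, 1.0], [2.0, 3.0]])
eigvals, P = np.linalg.eig(A)

# e^A = P e^D P^-1, where e^D exponentiates each diagonal entry.
expA = P @ np.diag(np.exp(eigvals)) @ np.linalg.inv(P)

# Sanity check against a truncated Taylor series: sum_k A^k / k!.
taylor = sum(np.linalg.matrix_power(A, k) / factorial(k) for k in range(30))
print(np.allclose(expA, taylor))  # True
```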