the pivot of a row…
is the leftmost non-zero entry in that row
a matrix is in row echelon form if…
all rows consisting only of zeroes are at the bottom
the pivot of each non-zero row in the matrix is in a column to the right of the pivot of the row above it
a matrix is in reduced row echelon form if…
the matrix is in echelon form
the pivot of each non-zero row is 1
each pivot is the only non-zero entry in its column
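the RREF conditions above translate directly into the Gauss-Jordan algorithm. a minimal pure-Python sketch (the function name and the use of exact Fraction arithmetic are my own choices):

```python
from fractions import Fraction

def rref(matrix):
    """Gauss-Jordan elimination: returns the reduced row echelon form."""
    A = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a non-zero entry in this column
        r = next((i for i in range(pivot_row, rows) if A[i][col] != 0), None)
        if r is None:
            continue
        A[pivot_row], A[r] = A[r], A[pivot_row]
        # scale so the pivot is 1
        p = A[pivot_row][col]
        A[pivot_row] = [x / p for x in A[pivot_row]]
        # clear every other entry in the pivot column
        for i in range(rows):
            if i != pivot_row and A[i][col] != 0:
                f = A[i][col]
                A[i] = [a - f * b for a, b in zip(A[i], A[pivot_row])]
        pivot_row += 1
    return A
```

for example, rref([[1, 2, 3], [2, 4, 7]]) row-reduces to [[1, 2, 0], [0, 0, 1]].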
a system of linear equations is called consistent…
if it has at least one solution. if the system has no solutions, it is called inconsistent
let C be the coefficient matrix for a consistent system of linear equations in variables x1, …, xn.
we say that xi is a basic variable for the system if…
the ith column of RREF(C) has a pivot
we say that xi is a free variable for the system if…
the ith column of RREF(C) does not have a pivot
Rouché-Capelli Theorem:
suppose a system of linear equations has augmented matrix A and coefficient matrix C. then…
the system is inconsistent if and only if the last column of RREF(A) has a pivot
the system has exactly one solution if and only if the last column of RREF(A) does not have a pivot, and every column of RREF(C) has a pivot
the system has infinitely many solutions if and only if the last column of RREF(A) does not have a pivot and RREF(C) has a column without a pivot
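the theorem can be checked mechanically: run forward elimination on the augmented matrix and record which columns receive pivots (forward elimination yields the same pivot columns as RREF). a sketch; function names are illustrative:

```python
from fractions import Fraction

def pivot_columns(matrix):
    """Columns of RREF(matrix) that contain a pivot (forward elimination suffices)."""
    A = [[Fraction(x) for x in row] for row in matrix]
    pivots, pr = [], 0
    for col in range(len(A[0])):
        r = next((i for i in range(pr, len(A)) if A[i][col] != 0), None)
        if r is None:
            continue
        A[pr], A[r] = A[r], A[pr]
        for i in range(pr + 1, len(A)):
            f = A[i][col] / A[pr][col]
            A[i] = [a - f * b for a, b in zip(A[i], A[pr])]
        pivots.append(col)
        pr += 1
    return pivots

def classify(augmented):
    """Apply the Rouche-Capelli theorem to an augmented matrix [C | b]."""
    n_vars = len(augmented[0]) - 1
    piv = pivot_columns(augmented)
    if n_vars in piv:            # pivot in the last column
        return "inconsistent"
    if len(piv) == n_vars:       # every column of C has a pivot
        return "unique solution"
    return "infinitely many solutions"
```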
a linear combination of vectors v1, v2, …, vn in Rm is…
a vector of the form w = c1v1 + c2v2 + … + cnvn where c1, c2, …, cn are called the coefficients of the linear combination
the span of vectors v1, v2, …, vn in Rm is…
the set Span(v1, …, vn) = {c1v1 + c2v2 + … + cnvn | c1, c2, …, cn in R}. that is, Span(v1, …, vn) is the set of all linear combinations of vectors v1, v2, …, vn
a set of vectors {v1, v2, …, vn} in Rm is called linearly dependent if…
at least one of the vectors is a linear combination of the others. that is, for at least one i in {1, …, n}, we have vi in Span(v1, v2, …, vi-1, vi+1, …, vn). otherwise, the vectors are linearly independent
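linear independence can be tested by row-reducing the matrix whose rows are the given vectors: the set is independent exactly when every row ends up with a pivot, i.e. the rank equals the number of vectors (row rank equals column rank, so rows serve as well as columns). a sketch with an illustrative function name:

```python
from fractions import Fraction

def is_linearly_independent(vectors):
    """True iff the given vectors (rows) are linearly independent."""
    A = [[Fraction(x) for x in v] for v in vectors]
    pr = 0  # number of pivots found so far
    for col in range(len(A[0])):
        r = next((i for i in range(pr, len(A)) if A[i][col] != 0), None)
        if r is None:
            continue
        A[pr], A[r] = A[r], A[pr]
        # eliminate below the pivot
        for i in range(pr + 1, len(A)):
            f = A[i][col] / A[pr][col]
            A[i] = [a - f * b for a, b in zip(A[i], A[pr])]
        pr += 1
    return pr == len(vectors)
```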
a vector subspace is any set of vectors V in Rm that satisfies all of the following properties:
V is non-empty (i.e. the zero-vector is in V)
V is closed under vector addition: for every v, w in V, v + w is in V.
V is closed under scalar multiplication: for every v in V and real number c, cv is in V
let V be a vector subspace of Rn. a spanning/generating set for V is…
any subset B of V such that V=Span(B)
a subset B of a vector space V is called a basis if…
B is a spanning set for V
B is linearly independent
let V be a non-zero vector subspace of Rn. then, the dimension of V, dim(V), is…
the size of any basis for V
the standard basis for Rn is the set ε := {e1, e2, …, en} where ei is…
the vector with 1 in the ith coordinate and 0 in all other coordinates
let A be a mxn matrix with column vectors A = [v1 v2 … vn]. then, for every vector x in Rn, the matrix-vector product of A and x is…
the vector in Rm defined by Ax := x1v1 + … + xnvn
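the definition Ax := x1v1 + … + xnvn says to scale each column of A by the matching entry of x and add the results. a direct sketch:

```python
def matvec(A, x):
    """Compute Ax as x1*v1 + ... + xn*vn, where v1..vn are the columns of A."""
    m, n = len(A), len(A[0])
    result = [0] * m
    for j in range(n):        # column j of A, scaled by x[j]
        for i in range(m):
            result[i] += x[j] * A[i][j]
    return result
```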
let A be a mxn matrix. then, the matrix transformation associated to A is the function:
TA: Rn→Rm defined by TA(x) := Ax
a function F: Rn→Rm is called linear if it satisfies the following properties for all vectors v, w in Rn and scalars c in R:
F(v + w) = F(v) + F(w)
F(cv) = cF(v)
let F: Rn→Rm be a linear transformation. then the defining matrix of F is the mxn matrix M satisfying:
F(x) = Mx, for all vectors x in Rn, denoted M = MF
a function f: X→Y is called injective if:
for every y in Y, there is at most one input x in X such that f(x) = y
a function f: X→Y is called surjective if:
for every y in Y, there is at least one input x in X such that f(x) = y
a function f: X→Y is called bijective if:
f is both injective and surjective
let V be a subspace of Rn and W a subspace of Rm. an isomorphism between V and W is…
any linear bijective map F: V→W. if an isomorphism exists between two vector spaces, we say these spaces are isomorphic, and write V ≅ W
let F: Rn→Rm. the kernel of F is the subset ker(F)⊆Rn defined by:
ker(F) := {x in Rn | F(x) = 0}
let F: Rn→Rm. the image of F is the subset im(F)⊆Rm defined by:
im(F) := {F(x) | x in Rn}
let F: Rn→Rm. the rank of F is…
the dimension of im(F), denoted rank(F)
let F: Rn→Rm. the nullity of F is…
the dimension of ker(F), denoted nullity(F)
let A be a mxn matrix with column vectors A = [v1 … vn]. then, the column space of A is…
the subspace of Rm given by Col(A) := Span(v1, …, vn)
let A be a mxn matrix with column vectors A = [v1 … vn]. then, the null space of A is…
the subspace of Rn given by Nul(A) := {x in Rn | Ax = 0}
let A be a matrix. the nullity of A is…
the dimension of Nul(A), denoted nullity(A)
let A be a matrix. the rank of A is…
the dimension of Col(A), denoted rank(A)
a system of linear equations is called homogeneous if:
the constant coefficients are all equal to zero
let A = [v1 … vn] and B = [w1 … wn] be mxn matrices. the sum of A and B is…
the mxn matrix A + B := [v1 + w1 … vn + wn]
let A = [v1 … vn] be an mxn matrix and c a real number. the scalar product of A with c is…
the mxn matrix cA := [cv1 … cvn]
let A be a mxk matrix and B = [b1 … bn] be a kxn matrix. then, the matrix product of A and B is…
the mxn matrix AB = [Ab1 … Abn]
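the column-by-column definition AB = [Ab1 … Abn] can be coded literally: multiply A against each column of B, then reassemble the resulting columns into a matrix. function name is illustrative:

```python
def matmul(A, B):
    """Compute AB one column at a time: column j of AB is A times column j of B."""
    m, k, n = len(A), len(B), len(B[0])

    def times_column(col):  # A applied to a single column of B
        return [sum(A[i][t] * col[t] for t in range(k)) for i in range(m)]

    cols = [times_column([B[t][j] for t in range(k)]) for j in range(n)]
    # assemble the columns Ab1 ... Abn back into an m x n matrix
    return [[cols[j][i] for j in range(n)] for i in range(m)]
```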
the identity matrix In is…
the nxn matrix In = [e1 … en]
let A be a nxn matrix. the inverse of A, if it exists, is…
the nxn matrix B such that AB = BA = In, denoted B = A-1
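for the 2x2 case the inverse has a closed form (the adjugate formula), which gives a quick check on the definition AB = BA = I2. a sketch using exact arithmetic; the function name is my own:

```python
from fractions import Fraction

def inverse_2x2(A):
    """Inverse of a 2x2 matrix via the adjugate formula; None if det(A) = 0."""
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        return None  # A is not invertible
    det = Fraction(det)
    # swap the diagonal, negate the off-diagonal, divide by the determinant
    return [[d / det, -b / det], [-c / det, a / det]]
```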
an nxn matrix is called elementary if:
it can be obtained by performing exactly one elementary row operation on the identity matrix
the unit square is the subset of R² given by…
S := {a1e1 + a2e2 : 0 <= a1, a2 <= 1}
an ordered basis {b1, b2} for R² is called positively oriented if:
we can rotate b1 counterclockwise through an angle of less than 180° (that is, without crossing the line spanned by b1) to reach the direction of b2
let F: R²→R² be a linear transformation. then, the determinant of F, denoted by det(F) is…
the oriented area of F(S). that is, det(F) :=
area(F(S)), if {F(e1), F(e2)} is positively oriented
-area(F(S)), if {F(e1), F(e2)} is negatively oriented
0, if area(F(S)) = 0
if A is a 2×2 matrix, the determinant of A, denoted by det(A) is…
the determinant of the matrix transformation TA. that is, det(A) := det(TA)
the unit cube is the subset of R³ given by…
C := {a1e1 + a2e2 + a3e3 : 0 <= a1, a2, a3 <= 1}
for a nxn matrix A = (aij), the ij-minor of A is…
the (n-1)x(n-1) matrix Aij obtained from A by deleting the ith row and jth column
let A be the nxn matrix with ij-entry equal to aij. we define the determinant of A by the following cofactor expansion formula:
det(A) := a11 det(A11) - a12 det(A12) + … + (-1)^(n+1) a1n det(A1n)
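the cofactor expansion formula is naturally recursive: expand along the first row, alternating signs, until the base case of a 1x1 matrix. a direct sketch:

```python
def det(A):
    """Determinant via cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # the 1j-minor: delete row 0 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total
```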
let A be a nxn matrix. a non-zero vector v is an eigenvector of A if:
there exists a real number λ such that Av = λv. the scalar λ is called an eigenvalue of A.
let A be a nxn matrix with eigenvalue λ. the λ-Eigenspace of A is…
the vector subspace of Rn defined by Eλ := Nul(A - λIn)
let A be a nxn matrix with eigenvalue λ. the geometric multiplicity of λ is…
the dimension of the λ-eigenspace, dim(Eλ)
for a nxn matrix A, the characteristic polynomial of A is…
χA(x) := det(A - xIn)
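for a 2x2 matrix the characteristic polynomial works out to x^2 - tr(A)x + det(A), so the eigenvalues come from the quadratic formula. a sketch assuming the eigenvalues are real (otherwise the square root below fails); function names are my own:

```python
import math

def char_poly_2x2(A):
    """Coefficients (1, -trace, det) of det(A - x I) = x^2 - tr(A) x + det(A)."""
    a, b = A[0]
    c, d = A[1]
    return (1, -(a + d), a * d - b * c)

def eigenvalues_2x2(A):
    """Real roots of the characteristic polynomial, smallest first."""
    _, p, q = char_poly_2x2(A)        # x^2 + p x + q
    root = math.sqrt(p * p - 4 * q)   # assumes real eigenvalues
    return sorted([(-p - root) / 2, (-p + root) / 2])
```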
let B = {v1, …, vn} be an ordered basis for a vector space V. recall that every vector x in V can be written in the form x = x1v1 + … + xnvn. the B-coordinates of x is the vector in Rn given by…
[x]B = [x1 … xn]
let C and B be bases for a vector space V. then the change of base matrix MC←B is the matrix satisfying…
MC←B[x]B = [x]C, for every vector x in V
let C be a basis for a vector space V. then for any vector x and y in V and real number scalar k…
[x + y]C = [x]C + [y]C and [kx]C = k[x]C
let {b1, …, bn} be a linearly independent subset of a vector space V. then, for any basis C of V, the set {[b1]C, …, [bn]C} is…
also linearly independent
let F: Rn → Rn be a linear transformation, and B be any basis for Rn. then, the defining matrix of F with respect to the basis B is…
the matrix M satisfying [F(x)]B = M[x]B, for every vector x in Rn, denoted M = MF,B
two nxn matrices B and C are called similar if:
they represent the same function, but in possibly different bases. that is, there is a single linear function F: Rn → Rn and bases B' and C' for Rn so that MF,B' = B and MF,C' = C
a matrix D is called diagonal if:
the only non-zero entries in the matrix appear on the diagonal. we write D = diag(λ1, …, λn)
an nxn matrix A is called diagonalizable if:
it is similar to a diagonal matrix
suppose that A is a nxn diagonalizable matrix with eigenvalues λ1, …, λn and corresponding linearly independent eigenvectors v1, …, vn. we call the equality
A = CDC-1 the eigendecomposition of the matrix A, where C = [v1 … vn] and D = diag(λ1, …, λn)
let u and v be vectors in Rn. the dot product of u and v is the scalar…
u • v := u1v1 + u2v2 + … + unvn
let u be a vector in Rn. the norm of u is…
||u|| := √(u • u)
let u and v be vectors in Rn. the distance between vectors u and v is…
d(u, v) := ||u - v||
let u and v be vectors in Rn. we say that u and v are orthogonal if…
u • v = 0
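the dot product, norm, distance, and orthogonality definitions above chain together directly. a small sketch:

```python
import math

def dot(u, v):
    """u . v = u1*v1 + ... + un*vn."""
    return sum(ui * vi for ui, vi in zip(u, v))

def norm(u):
    """||u|| = sqrt(u . u)."""
    return math.sqrt(dot(u, u))

def distance(u, v):
    """d(u, v) = ||u - v||."""
    return norm([a - b for a, b in zip(u, v)])

def orthogonal(u, v):
    """u and v are orthogonal iff u . v = 0."""
    return dot(u, v) == 0
```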
a basis B = {v1, v2, …, vn} is orthogonal if…
vi • vj = 0 for every i ≠ j
a basis B is called orthonormal if it is orthogonal and…
||vi|| = 1 for every vi in B
we call an nxn matrix Q orthogonal if…
its column vectors form an orthonormal basis for Rn
for vectors x, y in Rn, the orthogonal projection of x onto y is…
projy(x) := [(x • y) / (y • y)] y
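the projection formula can be coded verbatim; the function name is illustrative:

```python
def project(x, y):
    """Orthogonal projection of x onto y: ((x . y) / (y . y)) y."""
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    c = dot(x, y) / dot(y, y)   # the scalar coefficient
    return [c * yi for yi in y]
```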
an nxn matrix A is called orthogonally diagonalizable if…
there exists an orthogonal matrix Q and diagonal matrix D such that QTAQ = D
suppose that A is an nxn symmetric matrix with eigenvalues λ1, …, λn and orthonormal basis of eigenvectors {v1, …, vn}. we call the equality
A = QDQT a spectral decomposition of A, where D = diag(λ1, …, λn) and Q = [v1 … vn]
let A be a mxn matrix and {v1, …, vn} be an orthonormal basis for Rn of eigenvectors for ATA, as above. the singular values of A are…
σi := ||Avi||
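for a matrix with two columns, ATA is 2x2, so its eigenvalues (and hence the singular values, their square roots) follow from the trace and determinant. a sketch under that restriction; the function name is my own:

```python
import math

def singular_values_2col(A):
    """Singular values of an m x 2 matrix: square roots of the eigenvalues of A^T A.
    Equivalent to sigma_i = ||A v_i|| for unit eigenvectors v_i of A^T A."""
    m = len(A)
    # form the 2x2 matrix A^T A
    g = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)]
         for i in range(2)]
    tr = g[0][0] + g[1][1]
    det = g[0][0] * g[1][1] - g[0][1] * g[1][0]
    disc = math.sqrt(tr * tr - 4 * det)  # real since A^T A is symmetric PSD
    eigs = [(tr + disc) / 2, (tr - disc) / 2]
    return [math.sqrt(max(e, 0.0)) for e in eigs]
```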