a linear equation in n variables
a1x1 + a2x2 + ... + anxn = b
system of linear equations
any finite collection of equations of the form a1x1 + ... + anxn = b
homogeneous
a linear equation of the form a1x1 + ... + anxn = 0
always has the solution in which every variable is 0, called the trivial solution
(x1, ..., xn) = (0, 0, ..., 0)
either the trivial solution is unique or there are infinitely many solutions
linear system in 2 variables
a1x + b1y = c1
a2x + b2y = c2
each equation defines a line in the x-y plane
one unique solution
the unique intersection point of 2 lines
no solution
distinct parallel lines that never intersect
infinitely many solutions
both equations define the same line, so every point on that line is a solution
eliminating one equation against the other leaves an identity such as 0 = 0, so every point on the remaining line satisfies the system
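For instance, x + y = 2 and x − y = 0 meet in the single point (1, 1); x + y = 2 and x + y = 3 are distinct parallel lines with no solution; and x + y = 2 together with 2x + 2y = 4 describes the same line, giving infinitely many solutions.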
linear systems in 3 variables
a1x + b1y + c1z = d1
a2x + b2y + c2z = d2
a3x + b3y + c3z = d3
3 planes can either have
empty intersection (no solution)
intersect in a single point (one unique solution)
intersect in a line/plane (infinite solutions)
augmented matrix
given a system of m equations in n variables, the system can be abbreviated as the augmented matrix
[ a11 a12 ... a1n | b1 ]
[ a21 a22 ... a2n | b2 ]
[ ...                  ]
[ am1 am2 ... amn | bm ]
elementary row operations
1. (Replacement) Replace one row by the sum of itself and a multiple of another row.
2. (Interchange) Interchange two rows
3. (Scaling) Multiply all entries in a row by a nonzero constant
row echelon form
- the first nonzero entry in each nonzero row is a 1, called a leading one
- rows consisting of all zeros are grouped together at the bottom
- the leading one in a lower row occurs further to the right than the leading one in the row above it
reduced row echelon form
satisfies all conditions of REF, and each column that contains a leading one has zeros everywhere else
Gauss-Jordan Elimination
A method of solving a linear system of equations.
1. locate the leftmost column that is not all zeros
2. if necessary swap two rows to get a nonzero entry at the top of the column in step 1
3. multiply the top row by a constant to give a leading one
4. add multiples of the top row to the lower rows to make all entries in the column from step 1 zero
5. cover the top row and repeat the first 4 steps with the uncovered matrix
6. to reduce fully (reach RREF), begin at the bottom of the matrix and add multiples of each row to the rows above it to clear the entries above each leading one
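A minimal sketch of these steps in plain Python (no external libraries; the helper name rref and the example system below are illustrative, not from the notes):

```python
def rref(M):
    """Return the reduced row echelon form of a matrix given as a list of rows."""
    A = [row[:] for row in M]          # work on a copy
    rows, cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(cols):
        # steps 1-2: find a nonzero entry in this column at or below pivot_row and swap it up
        pivot = next((r for r in range(pivot_row, rows) if A[r][col] != 0), None)
        if pivot is None:
            continue
        A[pivot_row], A[pivot] = A[pivot], A[pivot_row]
        # step 3: scale the pivot row to get a leading one
        lead = A[pivot_row][col]
        A[pivot_row] = [x / lead for x in A[pivot_row]]
        # steps 4 and 6: clear every other entry in the pivot column (below and above)
        for r in range(rows):
            if r != pivot_row and A[r][col] != 0:
                factor = A[r][col]
                A[r] = [a - factor * b for a, b in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return A

# augmented matrix for: x + 2y = 5, 3x + 4y = 6
print(rref([[1, 2, 5], [3, 4, 6]]))   # leading ones on the left, solution x = -4, y = 4.5 on the right
```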
mxn
m is the number of rows
n is the number of columns
square matrix
order nxn
row vector
A is 1xn
A = [a1 ... an]
column vector
A is nx1
A = [b1 ... bn]^T (the entries b1 through bn arranged in a single column)
Matrix addition and subtraction
defined for matrices with equal dimensions by adding (or subtracting) the corresponding components, which are the entries in the same position in each matrix.
(A + B)ij= Aij + Bij
scalar multiplication
(cA)ij = c(A)ij
the scalar c multiplies every component of the matrix
transpose matrix
for any nxm matrix A, the transpose A^T is defined by interchanging the rows and the columns of A
(A^T)^T = A
(λA)^T = λ(A^T)
(AB)^T=(B^T)(A^T)
(A+B)^T = A^T + B^T
Matrix Multiplication
If A and B are 3x3: multiply row 1 of A by column 1 of B, then row 1 by column 2, then row 1 by column 3 to get the first row of AB; repeat with rows 2 and 3. Matrix multiplication is not commutative.
# columns (n) in first matrix = # rows (m) in second matrix
ex. A is (3x2) B is (2x5) then AB is (3x5)
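As a quick illustration of the shape rule and of non-commutativity (a numpy sketch, with arbitrary example matrices that are not from the notes):

```python
import numpy as np

A = np.ones((3, 2))      # 3x2
B = np.ones((2, 5))      # 2x5: inner dimensions match (2 = 2)
print((A @ B).shape)     # (3, 5)

C = np.array([[1, 2], [3, 4]])
D = np.array([[0, 1], [1, 0]])
print(C @ D)             # [[2 1] [4 3]]
print(D @ C)             # [[3 4] [1 2]]  -- CD != DC, so multiplication is not commutative
```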
solution of the system
Ax̄ = b, where x̄ is the column vector of variables and b is the column vector of constants
trace
Tr(A) is the sum of the main-diagonal entries: Tr(A) = a11 + a22 + ... + ann (the sum over i = 1 to n of aii)
properties of trace
tr(λA + B) = λtrA + trB
tr(A+B) = tr(A) + tr(B)
tr (AB) = tr(BA)
tr(A^T) = trA
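A quick numerical sanity check of these properties (a numpy sketch with arbitrary example matrices):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])
print(np.trace(A + B) == np.trace(A) + np.trace(B))   # True
print(np.trace(A @ B) == np.trace(B @ A))             # True
print(np.trace(A.T) == np.trace(A))                   # True
```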
existence of zero divisors
if A is nxm and B, C are mxk, with D kxl
A(B+C) = AB + AC
(B+C)D= BD + CD
A(λB) = λ(AB) = (λA)B
A( a zero matrix) = a zero matrix
the identity matrix
I is the square matrix with ones along the main diagonal and zeros everywhere else
for a square matrix A, AI = IA = A
inverse of a matrix
suppose A is nxn; then there is sometimes a matrix B with AB = I, and B is called the inverse of A, denoted A^-1
AB = BA = I, making B the inverse of A
if A is invertible then its inverse is unique
if A and B have the same size and are both invertible, then AB is also invertible, with (AB)^-1 = B^-1 A^-1
Invertible Matrix
An n x n matrix A is invertible if there is an n x n matrix C such that CA = I and AC = I, where I is the n x n identity matrix. C, in this case, is the inverse of A.
* a non-invertible matrix is sometimes called a singular matrix, while an invertible matrix is called a nonsingular matrix.
*An n x n matrix is invertible if and only if A is row equivalent to the identity matrix
2x2 matrix inverse
for A = [a, b; c, d]: if ad - bc = 0 then A is singular
if ad - bc ≠ 0, then A is invertible and A^-1 = 1/(ad - bc) multiplied by the matrix [d, -b; -c, a]
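For instance, if A = [2, 5; 1, 3], then ad − bc = (2)(3) − (5)(1) = 1 ≠ 0, so A^-1 = (1/1)[3, -5; -1, 2] = [3, -5; -1, 2]; multiplying out AA^-1 confirms it gives I.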
inverse and transpose
suppose A is invertible then A^T is also invertible
(A^T)^-1 = (A^-1)^T
powers of a matrix
let A be square (nxn) and let k > 0 be a natural number; then A^k = (A)(A)···(A), with k factors of A
A^0 = I
A^-k = (A^-1)^k
follows the same rules as powers of natural numbers (e.g. A^j A^k = A^(j+k))
inverting nxn matrices
1. form an augmented matrix with [A | In]
2. perform gauss jordan elimination on A and mirror the elementary row operations on In
3. two possible outcomes:
a. a row of zeros appears on the A side, meaning A is singular; stop, A is not invertible
b. if the left side can be put into RREF with no row of zeros (i.e. it becomes In), the right side of the augmented matrix is A^-1
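A sketch of this procedure on a 2x2 example, assuming numpy is available (no row swap is needed here, so the pivot-swapping step is skipped; numpy.linalg.inv is used only to cross-check the result):

```python
import numpy as np

A = np.array([[2., 1.], [5., 3.]])
aug = np.hstack([A, np.eye(2)])          # step 1: form [A | I]

# step 2: Gauss-Jordan elimination on the augmented matrix
n = A.shape[0]
for i in range(n):
    aug[i] = aug[i] / aug[i, i]          # leading one (assumes the pivot is nonzero)
    for r in range(n):
        if r != i:
            aug[r] = aug[r] - aug[r, i] * aug[i]

A_inv = aug[:, n:]                       # step 3b: right half is A^-1
print(A_inv)                             # [[ 3. -1.] [-5.  2.]]
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```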
Elementary Matrices
a matrix which can be obtained by applying exactly one row operation to the identity matrix
every elementary matrix is invertible
The Following Are Equivalent theorem
if A is nxn, the following are equivalent:
1. A is invertible
2. Ax̄ = 0 has the unique solution x̄ = 0
3. the RREF of A is In
4. A may be written as a product A = E1E2...Ek where the Ei's are elementary matrices
5. Ax̄ = b has a unique solution for every b
Determine if a system of linear equation (SLE) is consistent
1. Given the SLE, write the corresponding matrix equation Ax̄ = b
- check if A is square; if not, go to step 2
- if A is square, check if it is invertible; if so, x̄ = A^-1 b
2. form the augmented matrix of the system [A | b]
3. put the augmented matrix in RREF to tell if the system is consistent by reading the solution off the matrix
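A sketch of steps 2-3 using sympy's rref (sympy assumed available; the system below is an arbitrary example):

```python
from sympy import Matrix

A = Matrix([[1, 2], [2, 4]])
b = Matrix([3, 6])
aug = A.row_join(b)                      # augmented matrix [A | b]
R, pivots = aug.rref()
print(R)                                 # Matrix([[1, 2, 3], [0, 0, 0]])
# consistent iff no pivot lands in the last (augmented) column
print(aug.shape[1] - 1 not in pivots)    # True -> consistent (infinitely many solutions here)
```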
diagonal matrix
a square nxn matrix whose off-diagonal entries are all zero
if D is diagonal with nonzero diagonal entries, then D can be expressed as a product of elementary (scaling) matrices
triangular matrix
A is nxn
1. A is upper triangular if all entries below the main diagonal are zero
2. A is lower triangular if all the entries above the main diagonal are zero
properties of triangular matrices
1. the transpose of an upper triangular matrix is lower triangular and vice versa
2. suppose A, B are upper triangular then so is AB
3. A triangular matrix is invertible if and only if all the entries on its main diagonal are nonzero
4. the inverse of an upper triangular matrix is upper triangular
5. suppose A and B are both upper triangular (or both lower triangular); then the diagonal entries of the product satisfy (AB)ii = (Aii)(Bii)
Symmetric Matrices
A is symmetric iff A = A^T, i.e. (A)ij = (A)ji for all 1 ≤ i, j ≤ n
properties of symmetric matrices
Suppose A, B are symmetric nxn matrices and λ is a nonzero scalar
1. A^T is symmetric
2. A+B, A-B are symmetric
3. λA is symmetric
4. the product of 2 symmetric matrices is not necessarily symmetric
invertibility and symmetric matrices
let M be a matrix of any size (mxn)
1. MM^T and M^TM are symmetric
2. if M is invertible then MM^T and M^TM are both invertible
Determinant of Matrix
a numerical invariant of a square matrix
2x2 Determinant
for A = [a, b; c, d]: det(A) = ad - bc = |A|
A is invertible if det(A) is nonzero
ijth minor of A
Mij = det(A[i,j]), the determinant of the submatrix obtained by deleting row i and column j from A
ijth cofactor of A
Ci,j = (-1)^(i+j) Mij
Mij is the ijth minor
cofactor expansion
1. det(A) can be found by cofactor expansion along the ith row: det(A) = ai1Ci1 + ai2Ci2 + ... + ainCin (i fixed, summing over the columns)
2. det(A) can likewise be found by expanding along the jth column: det(A) = a1jC1j + a2jC2j + ... + anjCnj
*try to expand along nice rows or columns that contain zeros in order to cancel the terms
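A small plain-Python sketch of cofactor expansion along the first row (the helper names minor and det, and the example matrix, are illustrative only):

```python
def minor(A, i, j):
    """Submatrix with row i and column j deleted."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    # expand along row 0: sum of a_0k * C_0k, where C_0k = (-1)^(0+k) * M_0k
    return sum((-1) ** k * A[0][k] * det(minor(A, 0, k)) for k in range(n))

A = [[2, 0, 1],
     [3, 1, 0],
     [0, 5, 4]]
print(det(A))    # 2*(1*4 - 0*5) - 0 + 1*(3*5 - 1*0) = 8 + 15 = 23
```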
Determinant of a Triangular Matrix
If A is a triangular matrix, then det A is the product of the entries on the main diagonal of A
determinants by row reduction
- if A has a row or column of all zeros then det(A) = 0
- let A and B be 2 square matrices of the same size
- try to make matrix triangular
1. if B is obtained by swapping 2 rows of A then det(B) = -det(A)
2. if B is obtained by multiplying a row of A by some scalar λ then det(B) = λdet(A)
3. if B is obtained by adding a multiple of one row of A to another row of A then det(B) = det(A), so no change
4. det(A^-1) = 1/det(A)
determinants of elementary matrices
1. if E is obtained by swapping 2 rows of the identity then det(E) = -1
2. if E is obtained by multiplying a row of the identity by a scalar λ, then det(E) = λ
3. if E is obtained by adding a multiple of one row of the identity to another row, then det(E) = 1
Skew Symmetric Matrix
A square matrix with Aij=-Aji for all i and j
scaling determinants
det(λA) = λ^n det(A)
- with A nxn, the scalar λ is raised to the power n
determinant of sums
det(A + B) does not in general equal det(A) + det(B)
determinant of a product
let A,B be nxn then
det(AB) = det(A)det(B)
adjoint matrix
if A is nxn then det(adj(A)) = det(A)^(n-1)
To find the adjoint of a matrix, first find the cofactor matrix of the given matrix. Then find the transpose of the cofactor matrix.
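For example, for a 2x2 matrix A = [a, b; c, d] the cofactor matrix is [d, -c; -b, a], so adj(A) = [d, -b; -c, a]; in general A·adj(A) = det(A)·I, so when det(A) ≠ 0, A^-1 = adj(A)/det(A), consistent with the 2x2 inverse formula above.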
diagonalized matrix
let A be nxn; then we say that A is diagonalizable if there is an invertible matrix P such that (P^-1)AP = D where D is a diagonal matrix
information preserved:
- det(D) = det(A) = det((P^-1)AP)
- if A is invertible then (P^-1)AP is invertible
- the trace of A trA=tr((P^-1)AP)
- characteristic polynomial is the same
characteristic polynomial
det(λI -A)
Finding Eigenvalues
the eigenvalues λ of A are the solutions of det(λI − A) = 0; for each eigenvalue, the eigenvectors are the nonzero solutions x̄ of (λI − A)x̄ = 0
- if A is invertible 0 will not be an eigenvalue of A
- if a matrix is upper triangular the eigenvalues are the diagonal entries
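A quick numpy check of both facts (the triangular matrix below is an arbitrary example):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [0., 3., 4.],
              [0., 0., 5.]])        # upper triangular
print(np.linalg.eigvals(A))         # [2. 3. 5.] -- the diagonal entries, up to ordering
print(np.linalg.det(A))             # 30.0, nonzero, so 0 is not an eigenvalue
```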
Eigenspace
the set of all solutions of Ax = λx where lambda is an eigenvalue, consists of all eigenvectors and the zero vector
a basis for the eigenspace of λ is a smallest set of vectors B such that every vector in the eigenspace can be written as a linear combination of the vectors in B
Diagonalizing a Matrix
1. compute the eigenvalues of A
- if A has n distinct eigenvalues (no repeated eigenvalues) then A is diagonalizable
- if there is a repeated eigenvalue, we need a further check (step 2) to decide whether A is diagonalizable
2. Pick an eigenvalue and find a basis for the eigenspace
- repeat for each eigenvalue
- if the total number of vectors is less than n A is not diagonalizable
3. let the eigenspace basis vectors become the columns of the matrix P such that (P^-1)AP = D, where D is a diagonal matrix with the corresponding eigenvalues on its diagonal
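A sketch of this recipe with numpy (assumed available; the 2x2 matrix below is an arbitrary example with distinct eigenvalues), including the matrix-power application from the next card:

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
eigvals, P = np.linalg.eig(A)        # columns of P are eigenvectors
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))               # diagonal, with the eigenvalues 5 and 2 on the diagonal
print(eigvals)                       # [5. 2.] (up to ordering)

# matrix powers via diagonalization: A^k = P D^k P^-1
k = 3
print(np.allclose(P @ np.diag(eigvals ** k) @ np.linalg.inv(P),
                  np.linalg.matrix_power(A, k)))    # True
```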
computing matrix powers
if A = PDP^-1 with D diagonal, then A^k = PD^kP^-1, and D^k is computed by raising each diagonal entry of D to the kth power
consistency
If a system has at least one solution, it is said to be consistent. If a consistent system has exactly one solution, it is independent. If a consistent system has an infinite number of solutions, it is dependent.
true
Let A be an n×n matrix. The linear system Ax = 4x has a unique solution if and only if A − 4I is an invertible matrix.
true
If A and B are n×n matrices such that AB=In, then BA = In.
true
if A is a square matrix, and if the linear system Ax = b has a unique solution, then the linear system Ax = c also must have a unique solution.
A matrix that is both symmetric and upper triangular must be a diagonal matrix.
the following are equivalent for an nxn matrix A
(a) A is diagonalizable.
(b) A has n linearly independent eigenvectors
A^k
A^k = PD^kP^-1
eigenvalue properties
if x̄ is an eigenvector of A with eigenvalue λ and also an eigenvector of B with eigenvalue μ, then:
λ + μ is an eigenvalue of A + B
λμ is an eigenvalue of AB
λ^3 is an eigenvalue of A^3
if A^n = 0
Let A be a square matrix, and suppose there exists n ∈ ℕ such that A^n = 0 (A is nilpotent). Then I − A is invertible and (I − A)^-1 = I + A + ... + A^(n-1).
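A numerical check of this identity (a numpy sketch; the strictly upper triangular matrix below is an arbitrary nilpotent example):

```python
import numpy as np

A = np.array([[0., 1., 2.],
              [0., 0., 3.],
              [0., 0., 0.]])
n = 3
print(np.allclose(np.linalg.matrix_power(A, n), 0))         # True: A^3 = 0
I = np.eye(3)
geometric_sum = I + A + A @ A                               # I + A + ... + A^(n-1)
print(np.allclose(np.linalg.inv(I - A), geometric_sum))     # True
```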