Summarised linear algebra
COURSE OUTLINE
This document covers core topics in linear algebra: linear equations, matrices, determinants, vector spaces, linear transformations, orthogonality, characteristic roots and vectors (eigenvalues and eigenvectors), quadratic forms, and their applications.
Linear Equations
Solution of systems of linear equations by the Gauss-Jordan Method.
Homogeneous and non-homogeneous systems: Definitions and distinctions; a system is homogeneous when every constant term (right-hand side) is zero.
Linear dependence of equations: Understanding when a set of equations is dependent or independent.
Matrices
Matrix form of a linear equation: Transition from equations to matrix representation.
Addition and scalar multiplication of matrices: Fundamental operations on matrices.
Matrix multiplication: Rules and methods for multiplying matrices.
Inverse of a matrix: Definition, properties, and methods to find the inverse when it exists.
Elementary matrices: Matrices derived from the identity matrix through elementary row operations.
Elementary row and column operations: Operations used to simplify matrices.
Equivalence of matrices: Conditions under which matrices are equivalent.
Normal form: The canonical form $\begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$, with $r$ the rank, to which every matrix is equivalent; it simplifies further computations.
Determinants
Properties of determinants: Exploration of various properties and rules governing determinants.
Cofactors: Signed minors, used in expanding and computing determinants.
Computation of determinants: Different methods including expansion by rows and columns.
Expansion by rows and columns; the adjoint matrix: Techniques to compute determinants and inverses.
Cramer's rule: A method to solve systems of linear equations using determinants.
Rank of a matrix: Understanding the dimension of the row or column span.
Vector Spaces
Linear independence: A set of vectors is linearly independent when no vector in the set can be written as a linear combination of the others.
Subspaces: Definitions and properties that lead to identifying subspaces within a greater vector space.
Spanning sets: Sets of vectors that can generate other vectors through linear combinations.
Basis: A set of vectors that are linearly independent and span a vector space.
Dimension: The number of vectors in a basis for the vector space.
Coordinates and coordinate transformation: Representation of vectors with respect to different bases.
Linear Transformations
Matrix representation of transformations: How linear transformations can be viewed in matrix form.
Composition of linear transformations: Understanding how sequences of transformations can be combined.
Effect of change of basis on matrix transformation: How transformations behave when the basis is changed.
Orthogonality
Inner Product spaces: Definitions and properties of inner products in vector spaces.
Length and angle: Measurement concepts in vector spaces.
Orthogonal vectors: Vectors that are perpendicular to each other in inner product spaces.
Orthogonal bases: Bases consisting of mutually orthogonal vectors.
Gram-Schmidt orthogonalization: A process to convert a set of vectors into an orthogonal set.
Orthogonal and unitary matrices: Properties and applications of these matrices, particularly in transformation contexts.
Orthogonal coordinate transformation: How to transform coordinates while preserving orthogonality.
Characteristic Roots and Vectors
Eigenvalues and Eigenvectors: Definitions, properties, and methods to compute.
Diagonalization of matrices: Conditions and methods to diagonalize given matrices.
Real-symmetric and Hermitian matrices: Special types of matrices relevant to fields of study.
Diagonalization by orthogonal or unitary matrix: How matrices can be diagonalized with orthogonal or unitary transformations.
Quadratic Forms
Congruence: Two quadratic forms (or symmetric matrices) are congruent when one can be obtained from the other by an invertible change of variables, i.e. $B = P^t A P$ for some invertible $P$.
Diagonalization and canonical forms: Methods to simplify quadratic forms into more manageable structures.
Rank and index, definite and semi-definite forms: Discussing classification of quadratic forms.
Applications to conic sections: How quadratic forms relate to geometric figures like conics.
Reference texts
Jacobson, N. 1985. Basic Algebra. W.H. Freeman.
Morris, A.O. 1978. Linear Algebra: An Introduction. Van Nostrand.
Finkbeiner, D.T., II. 1966. Introduction to Matrices and Linear Transformations, 2nd ed. W.H. Freeman.
CHAPTER 1
1.1 INTRODUCTION
The most general linear equation can be written as:
$a_1x_1 + a_2x_2 + \cdots + a_nx_n = b$
where there are $n$ unknowns $x_1, x_2, \ldots, x_n$ and $a_1, a_2, \ldots, a_n, b$ are all known numbers. A solution of this equation is a tuple of values $(t_1, t_2, \ldots, t_n)$ such that substituting $x_i = t_i$ for all $i$ satisfies the equation; the solution set is the collection of all such tuples.
Example 1.1:
Find the solution set of the equation:
$7x_1 - \frac{5}{9}x_2 = -1$
Solution:
Rearranging the equation gives:
$x_1 = \frac{5}{63}x_2 - \frac{1}{7}$
Let $x_2 = t \in \mathbb{R}$; then the general solution is $x_1 = \frac{5}{63}t - \frac{1}{7}$, $x_2 = t$.
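As a quick numerical check of Example 1.1 (a sketch, not part of the original text), we can substitute the parametric solution back into the equation for a few values of the parameter $t$, using exact rational arithmetic:

```python
from fractions import Fraction

def x1(t):
    # General solution from Example 1.1: x1 = (5/63) t - 1/7, with x2 = t
    return Fraction(5, 63) * t - Fraction(1, 7)

# Verify 7*x1 - (5/9)*x2 = -1 for several choices of t = x2
for t in [Fraction(0), Fraction(9), Fraction(-63), Fraction(1, 2)]:
    assert 7 * x1(t) - Fraction(5, 9) * t == -1
print("all checks passed")
```

Exact fractions avoid the floating-point rounding that would make an equality check like this unreliable.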
Definition 1.1.1: Field
A set $K$ (with operations of addition and multiplication) is called a field if it satisfies the following axioms:
(i) ADDITION AXIOMS
Closure Law: For all $a, b \in K$, $a + b \in K$.
Associative Law: For all $a, b, c \in K$, $(a + b) + c = a + (b + c)$.
Existence of additive identity: There exists an element $0 \in K$ such that $a + 0 = 0 + a = a$ for all $a \in K$.
Existence of additive inverse: For all $a \in K$, there exists $-a \in K$ such that $a + (-a) = (-a) + a = 0$.
Commutative Law: For all $a, b \in K$, $a + b = b + a$.
(ii) MULTIPLICATION AXIOMS
Closure Law: For all $a, b \in K$, $a \times b \in K$.
Associative Law: For all $a, b, c \in K$, $(a \times b) \times c = a \times (b \times c)$.
Existence of multiplicative identity: There exists an element $1 \in K$ such that $a \times 1 = 1 \times a = a$ for all $a \in K$.
Existence of multiplicative inverse: For all $a \in K$ with $a \neq 0$, there exists $b \in K$ such that $a \times b = b \times a = 1$; we write $b = a^{-1} = \frac{1}{a}$.
Commutative Law: For all $a, b \in K$, $a \times b = b \times a$.
Distributive Law: For all $a, b, c \in K$, $a(b + c) = ab + ac$ and $(b + c)a = ba + ca$.
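The rationals $\mathbb{Q}$ are a standard example of a field. As an illustrative spot check (not a proof, and not part of the original text), the sketch below samples random rationals with Python's `fractions.Fraction` and tests several of the axioms above:

```python
from fractions import Fraction
import random

random.seed(0)

def rand_q():
    # A random rational number; Q forms a field
    return Fraction(random.randint(-9, 9), random.randint(1, 9))

for _ in range(100):
    a, b, c = rand_q(), rand_q(), rand_q()
    assert (a + b) + c == a + (b + c)          # additive associativity
    assert a + b == b + a                      # additive commutativity
    assert a + 0 == a and a + (-a) == 0        # additive identity and inverse
    assert a * (b + c) == a * b + a * c        # distributivity
    if a != 0:
        assert a * (1 / a) == 1                # multiplicative inverse (a != 0)
print("sampled field axioms hold")
```

Note the guard `a != 0` before the inverse check, mirroring the restriction in the axiom.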
Note: Chain of Numbers
$\mathbb{N} \subset \mathbb{Z} \subset \mathbb{Q} \subset \mathbb{R} \subset \mathbb{C}$; in practice $K$ is taken to be the real numbers, the complex numbers, the rationals, etc.
1.2 ELEMENTARY ROW OPERATIONS ON MATRICES
The field of real numbers $\mathbb{R}$ or complex numbers $\mathbb{C}$ will be assumed.
Definition 1.2.1: Matrix
An $m \times n$ matrix over $K$ is a rectangular array of elements of K:
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$
Its rows are denoted $r_i$, $i = 1, 2, \ldots, m$, and its columns $c_j$, $j = 1, 2, \ldots, n$.
Example 1.2.1:
$$A = \begin{pmatrix} 1 & 0 & -3 & 1 \\ 2 & 1 & 3 & 1 \\ 0 & 1 & 1 & \cdots \end{pmatrix}$$
is a $3 \times 4$ matrix, where $(2, 1, 3, 1)$ is a row of the matrix A, and $(-3, 3, 1)$ is a column of the matrix A.
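For readers following along in code, the row/column terminology can be made concrete. Below is a sketch with the matrix of Example 1.2.1; since the text does not give the last entry of the third row, 0 is used as a placeholder (an assumption for illustration only):

```python
# The 3x4 matrix from Example 1.2.1, stored row by row.
# The last entry of the third row is not given in the text; 0 is a placeholder.
A = [[1, 0, -3, 1],
     [2, 1, 3, 1],
     [0, 1, 1, 0]]

row_2 = A[1]                      # second row: (2, 1, 3, 1)
col_3 = [row[2] for row in A]     # third column: (-3, 3, 1)
assert row_2 == [2, 1, 3, 1]
assert col_3 == [-3, 3, 1]
```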
Definition 1.2.2: Elementary Row Operations
On an $m \times n$ matrix $A = (a_{ij})$, the elementary row operations are:
1. For any $i$ ($1 \le i \le m$) and any scalar $\tau \neq 0$: multiply the $i$th row of A by $\tau$.
2. For any $i, j$ ($1 \le i, j \le m$; $i \neq j$): add $\tau$ times the $j$th row to the $i$th row of A.
3. Interchange the $i$th and $j$th rows.
In notation:
$r_i \to \tau r_i$
$r_i \to r_i + \tau r_j$
$r_i \leftrightarrow r_j$
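The three elementary row operations can be sketched directly as Python functions acting on a matrix stored as a list of rows (a minimal illustration; function names are my own, not from the text):

```python
from fractions import Fraction

def scale(A, i, t):
    """r_i -> t * r_i  (t must be nonzero)."""
    assert t != 0
    A[i] = [t * x for x in A[i]]

def add_multiple(A, i, j, t):
    """r_i -> r_i + t * r_j  (requires i != j)."""
    assert i != j
    A[i] = [x + t * y for x, y in zip(A[i], A[j])]

def swap(A, i, j):
    """r_i <-> r_j."""
    A[i], A[j] = A[j], A[i]

A = [[Fraction(2), Fraction(4)],
     [Fraction(1), Fraction(3)]]
scale(A, 0, Fraction(1, 2))    # r_0 -> (1/2) r_0, giving [1, 2]
add_multiple(A, 1, 0, -1)      # r_1 -> r_1 - r_0, giving [0, 1]
assert A == [[1, 2], [0, 1]]
```

Each operation is reversible (by $\tau^{-1}$, $-\tau$, or the same swap), which is why row-equivalent matrices share the same solution set.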
Definition 1.2.3: Echelon Matrix
An $m \times n$ matrix A is called an echelon matrix if:
The first nonzero element in each nonzero row is one (the leading one).
The leading one in any nonzero row occurs to the right of the leading one in any preceding row.
The nonzero rows appear before the zero rows.
An echelon matrix is called a reduced echelon matrix if, in addition, the leading one in any nonzero row is the only nonzero element in the column in which that one occurs.
Definition 1.2.4: Row Equivalent Matrices
If $A$ and $B$ are two $m \times n$ matrices, then $A$ is row equivalent to $B$ if $B$ can be obtained from $A$ by a finite sequence of elementary row operations.
Examples 1.2.1: Find the reduced echelon matrices of the following matrices
(a) $(1, -1, 1,1; 1, 2, 1, 2, -1)$
Perform the necessary row operations to achieve the reduced echelon form.
(b) …
(c) …
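The reduction asked for in these exercises can be automated with the Gauss-Jordan method. Below is a minimal sketch using exact arithmetic; the sample matrix at the end is illustrative only (the entries of part (a) are ambiguous in the text, so they are not used here):

```python
from fractions import Fraction

def rref(A):
    """Reduced row echelon form by Gauss-Jordan elimination."""
    A = [[Fraction(x) for x in row] for row in A]
    m, n = len(A), len(A[0])
    pivot_row = 0
    for col in range(n):
        # Find a row at or below pivot_row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, m) if A[r][col] != 0), None)
        if pr is None:
            continue
        A[pivot_row], A[pr] = A[pr], A[pivot_row]                       # interchange
        A[pivot_row] = [x / A[pivot_row][col] for x in A[pivot_row]]    # make leading 1
        for r in range(m):                                              # clear the column
            if r != pivot_row and A[r][col] != 0:
                t = A[r][col]
                A[r] = [x - t * y for x, y in zip(A[r], A[pivot_row])]
        pivot_row += 1
        if pivot_row == m:
            break
    return A

M = rref([[1, -1, 1], [2, 1, 3]])
assert M == [[1, 0, Fraction(4, 3)], [0, 1, Fraction(1, 3)]]
```

Each step uses only the three elementary row operations of Definition 1.2.2, so the output is row equivalent to the input and satisfies all the conditions of Definition 1.2.3.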
CHAPTER 2
2.1 MATRIX ALGEBRA
Definition 2.1.1: Equality of Matrices
If $A = (a_{ij})$ is an $m \times n$ matrix and $B = (b_{ij})$ is a $p \times q$ matrix, then $A = B$ if and only if $m = p$, $n = q$, and $a_{ij} = b_{ij}$ for $i = 1, \ldots, m$ and $j = 1, \ldots, n$.
Definition 2.1.2: Zero Matrix
The $m \times n$ matrix $0 = (a_{ij})$ such that $a_{ij} = 0$ for $i = 1, \ldots, m$, $j = 1, \ldots, n$ is called the zero matrix.
Definition 2.1.3: Matrix Addition
If $A = (a_{ij})$ is an $m \times n$ matrix and $B = (b_{ij})$ is a $p \times q$ matrix, then $A + B$ is defined if and only if $m = p$ and $n = q$, in which case $A + B = (a_{ij} + b_{ij})$, the $m \times n$ matrix obtained by adding the corresponding elements of A and B.
Definition 2.1.4: Scalar Multiplication
The product of a scalar $\tau$ and the matrix A, written $\tau \times A$ or $\tau A$, is the matrix obtained by multiplying each entry of A by $\tau$. Also note that $-A = (-1) \times A$ and $A - B = A + (-B)$.
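Both operations are entry-wise, which the following short sketch makes explicit (helper names are my own, for illustration):

```python
# Entry-wise addition and scalar multiplication for same-sized matrices
def mat_add(A, B):
    assert len(A) == len(B) and len(A[0]) == len(B[0])   # sizes must match
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(t, A):
    return [[t * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert mat_add(A, B) == [[6, 8], [10, 12]]
assert scalar_mul(-1, A) == [[-1, -2], [-3, -4]]               # -A = (-1)A
assert mat_add(A, scalar_mul(-1, B)) == [[-4, -4], [-4, -4]]   # A - B = A + (-B)
```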
Theorem 2.1.1: Matrix Properties
Let $V$ be the set of all $m \times n$ matrices over a field K. Then for any matrices $A, B, C \in V$ and any scalars $\tau_1, \tau_2 \in K$:
$(A + B) + C = A + (B + C)$ (Associative)
$A + 0 = A$ (Identity)
$A + (-A) = 0$ (Inverses)
$A + B = B + A$ (Commutative)
$\tau_1(A + B) = \tau_1 A + \tau_1 B$ (Distributive)
$(\tau_1 + \tau_2)A = \tau_1 A + \tau_2 A$
$(\tau_1 \tau_2)A = \tau_1(\tau_2 A)$
$1 \cdot A = A$ and $0A = 0$
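These identities all follow from the field axioms applied entry by entry. As an informal sanity check (a sketch, not a proof), the snippet below tests each property on random integer matrices:

```python
import random

random.seed(1)

def rand_mat(m, n):
    return [[random.randint(-5, 5) for _ in range(n)] for _ in range(m)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def smul(t, A):
    return [[t * a for a in row] for row in A]

for _ in range(50):
    A, B, C = (rand_mat(2, 3) for _ in range(3))
    t1, t2 = random.randint(-5, 5), random.randint(-5, 5)
    Z = [[0] * 3 for _ in range(2)]                    # the 2x3 zero matrix
    assert add(add(A, B), C) == add(A, add(B, C))      # associativity
    assert add(A, Z) == A and add(A, smul(-1, A)) == Z # identity, inverses
    assert add(A, B) == add(B, A)                      # commutativity
    assert smul(t1, add(A, B)) == add(smul(t1, A), smul(t1, B))
    assert smul(t1 + t2, A) == add(smul(t1, A), smul(t2, A))
    assert smul(t1 * t2, A) == smul(t1, smul(t2, A))
    assert smul(1, A) == A and smul(0, A) == Z
print("all properties verified on random samples")
```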
Definition 2.1.8: Transpose of a Matrix
If $A = (a_{ij})$ is an $m \times n$ matrix, the transpose of A is the $n \times m$ matrix, denoted $A^t$, obtained by interchanging the rows and columns of A; i.e. $A^t = (b_{ij})$ where $b_{ij} = a_{ji}$ for $i = 1, \ldots, n$; $j = 1, \ldots, m$.
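A transpose can be written directly from the definition, swapping the row and column indices (a minimal sketch):

```python
def transpose(A):
    """Return A^t: the (i, j) entry of A^t is the (j, i) entry of A."""
    return [[A[j][i] for j in range(len(A))] for i in range(len(A[0]))]

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2 x 3 matrix
At = transpose(A)        # its 3 x 2 transpose
assert At == [[1, 4], [2, 5], [3, 6]]
assert transpose(At) == A            # (A^t)^t = A
```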
Additional Notes
Further topics, such as eigenvalues, characteristic polynomials, and matrix norms, are essential for further study of linear algebra and its applications.