Comprehensive Notes on Linear Algebra, Ordinary Differential Equations, and Numerical Methods
General Information and Authors
These notes are on Mathematics - 1021, authored by Peeyush Chandra, A. K. Lal, V. Raghavendra, and G. Santhanam.
Supported by a grant from MHRD.
Part I: Linear Algebra
Chapter 1: Matrices
Definition and Notation
Matrix: A rectangular array of numbers; these notes are mostly concerned with real entries.
Rows and Columns: Horizontal arrays are rows; vertical arrays are columns.
Order: A matrix with $m$ rows and $n$ columns has order $m \times n$.
Representation: Denoted by $A = [a_{ij}]$, where $a_{ij}$ is the entry at the $i^{th}$ row and $j^{th}$ column.
Vectors: A matrix with one column is a column vector; one with a single row is a row vector.
Equality: Matrices $A$ and $B$ of the same order are equal if $a_{ij} = b_{ij}$ for all $i, j$.
Special Matrices
Zero Matrix: Each entry is zero, denoted by $\mathbf{0}$.
Square Matrix: Number of rows equals the number of columns ($m = n$).
Diagonal Entries: In a square matrix of order $n$, the entries $a_{11}, a_{22}, \ldots, a_{nn}$ form the principal diagonal.
Diagonal Matrix: Entry $a_{ij} = 0$ for all $i \neq j$. Denoted by $\mathrm{diag}(d_1, d_2, \ldots, d_n)$.
Scalar Matrix: A diagonal matrix where all diagonal entries are equal ($A = \alpha I_n$).
Identity Matrix ($I_n$): A square matrix where $a_{ii} = 1$ and $a_{ij} = 0$ for $i \neq j$.
Triangular Matrices: - Upper Triangular: $a_{ij} = 0$ for $i > j$. - Lower Triangular: $a_{ij} = 0$ for $i < j$.
Operations on Matrices
Transpose: For an $m \times n$ matrix $A$, the transpose $A^T$ is an $n \times m$ matrix with entry $(A^T)_{ij} = a_{ji}$. - Property: $(A^T)^T = A$.
Addition: Defined for matrices of the same order as $(A + B)_{ij} = a_{ij} + b_{ij}$. - Properties: Commutative ($A + B = B + A$) and Associative ($(A + B) + C = A + (B + C)$).
Scalar Multiplication: For a scalar $k$, $(kA)_{ij} = k\,a_{ij}$.
Additive Inverse: $-A = (-1)A$, such that $A + (-A) = \mathbf{0}$.
Matrix Multiplication: - For $A$ ($m \times n$) and $B$ ($n \times p$), the product $AB$ ($m \times p$) is defined by $(AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}$. - The product is defined only if the number of columns in $A$ equals the number of rows in $B$. - Commutativity: Generally $AB \neq BA$. If $AB = BA$, the matrices commute. - Associativity: $A(BC) = (AB)C$. - Distributive law: $A(B + C) = AB + AC$. - Multiplication by Identity: $AI_n = I_m A = A$. - Multiplication by a Diagonal Matrix $D = \mathrm{diag}(d_1, \ldots, d_n)$: In $DA$, the $i^{th}$ row of $A$ is multiplied by $d_i$. In $AD$, the $j^{th}$ column of $A$ is multiplied by $d_j$.
Transpose Laws: $(A + B)^T = A^T + B^T$ and $(AB)^T = B^T A^T$.
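A quick numerical spot-check of the laws above (a sketch using NumPy, which is an assumption of this example, not something the notes use):

```python
import numpy as np

# Two small integer matrices; the specific values are illustrative.
A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 1]])

assert not np.array_equal(A @ B, B @ A)       # multiplication is not commutative
assert np.array_equal((A @ B).T, B.T @ A.T)   # (AB)^T = B^T A^T
assert np.array_equal((A + B).T, A.T + B.T)   # (A + B)^T = A^T + B^T
print((A @ B).tolist())
```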
More Special Matrices
Symmetric: $A^T = A$.
Skew-Symmetric: $A^T = -A$. Diagonal entries must be zero.
Orthogonal: $A A^T = A^T A = I$.
Nilpotent: There exists a positive integer $k$ such that $A^k = \mathbf{0}$. The least such $k$ is the order of nilpotency.
Idempotent: $A^2 = A$.
Trace: For a square matrix, $\mathrm{tr}(A) = \sum_{i=1}^{n} a_{ii}$. - Laws: $\mathrm{tr}(A + B) = \mathrm{tr}(A) + \mathrm{tr}(B)$ and $\mathrm{tr}(AB) = \mathrm{tr}(BA)$.
Block Matrices
Matrices can be decomposed into smaller blocks (submatrices).
If $A = \begin{pmatrix} P & Q \end{pmatrix}$ and $B = \begin{pmatrix} H \\ K \end{pmatrix}$ are partitioned compatibly, then $AB = PH + QK$.
Block addition and multiplication require compatible partitions.
Matrices over Complex Numbers
Conjugate ($\bar{A}$): Replace each entry $a_{ij}$ with its complex conjugate $\overline{a_{ij}}$.
Conjugate Transpose ($A^*$): Transpose of the conjugate matrix, $A^* = (\bar{A})^T$.
Hermitian: $A^* = A$.
Skew-Hermitian: $A^* = -A$.
Unitary: $A^* A = A A^* = I$.
Normal: $A^* A = A A^*$.
Chapter 2: Linear System of Equations
Introduction to Linear Systems
A linear system of $m$ equations in $n$ unknowns is written $Ax = b$.
$A$ is the coefficient matrix; $[A \; b]$ is the augmented matrix.
Homogeneous System: $Ax = \mathbf{0}$. Always has the trivial solution $x = \mathbf{0}$.
Solution Set: Can have a unique solution, infinite solutions, or no solution.
Elementary Operations and Equivalent Systems
Elementary Row Operations: - $R_{ij}$: Interchange of rows $i$ and $j$. - $R_k(c)$: Multiply row $k$ by a non-zero constant $c$. - $R_{ij}(c)$: Replace row $i$ by (row $i$ + $c \times$ row $j$).
Row-Equivalent: Two matrices are row-equivalent if one is obtained from the other via elementary row operations.
Lemma: Equivalent linear systems have the same set of solutions.
Gauss Elimination and Echelon Forms
Gauss Elimination (Forward Elimination): Reducing the augmented matrix to an upper triangular form.
Row Reduced Form: - The first non-zero entry of each non-zero row is 1 (the leading term). - The column containing a leading 1 has zeros in all other entries.
Row Reduced Echelon Form (RREF): - Satisfies row reduced form. - Zero rows are at the bottom. - Leading 1s appear in a staircase pattern (left to right).
Variables: - Basic Variables: Variables corresponding to leading columns. - Free Variables: Variables not corresponding to leading columns.
Gauss-Jordan Elimination: Includes forward elimination and back substitution to reach RREF.
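The full procedure can be sketched in plain Python (an illustrative implementation, not code from the notes):

```python
def rref(M):
    """Reduce a matrix (list of rows of numbers) to row reduced echelon form."""
    M = [row[:] for row in M]              # work on a copy
    rows, cols = len(M), len(M[0])
    lead = 0                               # column of the next leading 1
    for r in range(rows):
        if lead >= cols:
            break
        i = r                              # find a row with a pivot in this column
        while abs(M[i][lead]) < 1e-12:
            i += 1
            if i == rows:
                i, lead = r, lead + 1
                if lead == cols:
                    return M
        M[r], M[i] = M[i], M[r]            # row interchange
        pivot = M[r][lead]
        M[r] = [x / pivot for x in M[r]]   # scale row to make the leading entry 1
        for i in range(rows):              # clear the pivot column elsewhere
            if i != r:
                factor = M[i][lead]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        lead += 1
    return M
```

For the system $x + y = 2$, $x - y = 0$, the augmented matrix `[[1, 1, 2], [1, -1, 0]]` reduces to `[[1, 0, 1], [0, 1, 1]]`, i.e. the unique solution $x = y = 1$.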
Elementary Matrices
Obtained by applying a single elementary row operation to the Identity matrix.
Multiplying $A$ on the left by an elementary matrix performs the corresponding row operation on $A$.
Multiplying $A$ on the right performs the corresponding column operation.
Rank of a Matrix
Row-rank: Number of non-zero rows in the row reduced form.
Rank: Row-rank equals column-rank; the common value is the rank, denoted $\mathrm{rank}(A)$.
Theorem: If $\mathrm{rank}(A) = r$, there exist invertible matrices $P$ and $Q$ (products of elementary matrices) such that $PAQ = \begin{pmatrix} I_r & 0 \\ 0 & 0 \end{pmatrix}$.
Consistency Theorem (Ax = b)
Let $\mathrm{rank}(A) = r$ and $\mathrm{rank}([A \; b]) = r_a$.
Infinite Solutions: If $r = r_a < n$. The solution set has the form $x_0 + k_1 u_1 + \cdots + k_{n-r} u_{n-r}$ with arbitrary scalars $k_i$.
Unique Solution: If $r = r_a = n$.
No Solution (Inconsistent): If $r < r_a$.
Homogeneous Systems: $Ax = \mathbf{0}$ has non-trivial solutions if and only if $\mathrm{rank}(A) < n$.
Invertible Matrices
Definition: A square matrix $A$ is invertible if there exists $B$ such that $AB = BA = I$.
Uniqueness: The inverse is unique.
Properties: - $(A^{-1})^{-1} = A$. - $(AB)^{-1} = B^{-1} A^{-1}$. - $(A^T)^{-1} = (A^{-1})^T$.
Equivalent conditions for Invertibility: - $A$ is invertible. - $\mathrm{rank}(A) = n$ (full rank). - RREF of $A$ is $I_n$. - $A$ is a product of elementary matrices. - $Ax = \mathbf{0}$ has only the trivial solution. - $Ax = b$ is consistent for every $b$.
Calculating Inverse: Apply Gauss-Jordan to $[A \; I]$ to get $[I \; A^{-1}]$.
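A NumPy sketch of this $[A \; I] \to [I \; A^{-1}]$ reduction (illustrative code; partial pivoting is our addition for numerical stability, not part of the notes):

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^{-1}]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])                        # the augmented matrix [A | I]
    for col in range(n):
        pivot = np.argmax(np.abs(M[col:, col])) + col    # partial pivoting
        if abs(M[pivot, col]) < 1e-12:
            raise ValueError("matrix is singular")
        M[[col, pivot]] = M[[pivot, col]]                # row interchange
        M[col] /= M[col, col]                            # scale to a leading 1
        for r in range(n):                               # clear the rest of the column
            if r != col:
                M[r] -= M[r, col] * M[col]
    return M[:, n:]                                      # right half is A^{-1}
```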
Determinants
Definition: Inductively defined for square matrices.
Notation: $\det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} \det\big(A(1|j)\big)$, where $A(1|j)$ is the submatrix obtained by deleting row 1 and column $j$.
Minor ($M_{ij}$): Determinant of the submatrix obtained by deleting row $i$ and column $j$.
Cofactor ($C_{ij}$): $C_{ij} = (-1)^{i+j} M_{ij}$.
Properties: - Interchanging two rows changes the sign of the determinant. - Multiplying a row by $c$ multiplies the determinant by $c$. - If two rows are equal, $\det(A) = 0$. - $\det(AB) = \det(A)\det(B)$. - $\det(A^T) = \det(A)$.
Adjoint ($\mathrm{adj}(A)$): The transpose of the cofactor matrix.
Standard Inverse Formula: $A^{-1} = \frac{1}{\det(A)}\,\mathrm{adj}(A)$.
Singular Matrix: $\det(A) = 0$. Non-singular if $\det(A) \neq 0$.
Cramer’s Rule: For non-singular $A$, the solution of $Ax = b$ has coordinates $x_j = \frac{\det(A_j)}{\det(A)}$, where $A_j$ is $A$ with column $j$ replaced by the vector $b$.
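A small sketch of Cramer’s rule using NumPy determinants (illustrative; practical solvers prefer elimination, since determinants are expensive):

```python
import numpy as np

def cramer(A, b):
    """Solve Ax = b for non-singular A via Cramer's rule: x_j = det(A_j)/det(A)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for j in range(len(b)):
        Aj = A.copy()
        Aj[:, j] = b                        # replace column j by b
        x[j] = np.linalg.det(Aj) / d
    return x
```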
Chapter 3: Finite Dimensional Vector Spaces
Definitions and Examples
Vector Space $V$: A set with vector addition and scalar multiplication satisfying 8 axioms (associativity, commutativity of addition, zero vector, inverse, distributive laws).
Examples: - $\mathbb{R}^n$: $n$-tuples of real numbers. - $\mathcal{P}_n(\mathbb{R})$: Polynomials of degree $\le n$. - $M_{m \times n}(\mathbb{R})$: $m \times n$ matrices. - $C([a, b])$: Continuous functions on $[a, b]$.
Subspace: A subset $W$ of $V$ is a subspace if $\alpha u + \beta v \in W$ for all vectors $u, v \in W$ and scalars $\alpha, \beta$.
Linear Combination and Span
Linear Combination: $\alpha_1 u_1 + \alpha_2 u_2 + \cdots + \alpha_n u_n$.
Linear Span ($L(S)$): The set of all linear combinations of elements in $S$. It is the smallest subspace containing $S$.
Row Space: Spanned by the rows of a matrix; its dimension equals the row-rank.
Column Space / Range ($\mathcal{R}(A)$): Spanned by the columns of a matrix.
Linear Independence and Bases
Linearly Dependent: Scalars $\alpha_i$, not all zero, exist such that $\alpha_1 u_1 + \cdots + \alpha_n u_n = \mathbf{0}$.
Linearly Independent: $\alpha_1 u_1 + \cdots + \alpha_n u_n = \mathbf{0}$ implies all $\alpha_i = 0$.
Basis: A linearly independent set that spans $V$.
Dimension ($\dim V$): Number of vectors in a basis of a finite-dimensional space.
Important Theorems: - Any two bases have the same number of vectors. - A linearly independent set can be extended to form a basis. - If $W$ is a subspace of $V$, then $\dim W \le \dim V$. - $\dim(W_1 + W_2) = \dim W_1 + \dim W_2 - \dim(W_1 \cap W_2)$.
Ordered Bases and Coordinates
Ordered Basis: A basis where elements have a fixed sequence.
Coordinates ($[v]_{\mathcal{B}}$): Column vector of coefficients required to represent $v$ in the ordered basis $\mathcal{B}$.
Change of Basis Matrix: Let $\mathcal{B}_1 = (u_1, \ldots, u_n)$ and $\mathcal{B}_2$ be ordered bases. Then $[v]_{\mathcal{B}_2} = P\,[v]_{\mathcal{B}_1}$, where the $i^{th}$ column of $P$ is $[u_i]_{\mathcal{B}_2}$.
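A coordinate-change example in $\mathbb{R}^2$ (the basis and the numbers are illustrative, not from the notes):

```python
import numpy as np

# Ordered basis B1 = ((1, 1), (1, -1)); the target basis is the standard one,
# so the change-of-basis matrix P simply has the B1 vectors as columns.
P = np.array([[1., 1.],
              [1., -1.]])          # column i is the i-th basis vector of B1
v_B1 = np.array([2., 3.])          # [v]_{B1}
v = P @ v_B1                       # v in standard coordinates: 2*(1,1) + 3*(1,-1)
assert np.allclose(np.linalg.solve(P, v), v_B1)   # solving recovers [v]_{B1}
print(v.tolist())
```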
Chapter 4: Linear Transformations
Definitions and Basic Properties
Linear Transformation: A map $T: V \to W$ such that $T(\alpha u + \beta v) = \alpha T(u) + \beta T(v)$.
Properties: - $T(\mathbf{0}) = \mathbf{0}$. - A linear transformation is determined by its values on a basis.
Inverse Transform: If $T$ is one-one and onto, $T^{-1}$ exists and is linear.
Matrix of a Linear Transformation
Let $T: V \to W$ with ordered bases $\mathcal{B}_1 = (u_1, \ldots, u_n)$ of $V$ and $\mathcal{B}_2$ of $W$; the matrix $A = T[\mathcal{B}_1, \mathcal{B}_2]$ has $i^{th}$ column $[T(u_i)]_{\mathcal{B}_2}$. This matrix maps coordinates from basis $\mathcal{B}_1$ of $V$ to basis $\mathcal{B}_2$ of $W$.
Identity: $[T(v)]_{\mathcal{B}_2} = T[\mathcal{B}_1, \mathcal{B}_2]\,[v]_{\mathcal{B}_1}$.
Rank-Nullity Theorem
Range ($\mathcal{R}(T)$): Subspace of images $\{T(v) : v \in V\}$.
Null Space / Kernel ($\mathcal{N}(T)$): Subspace of vectors mapping to $\mathbf{0}$.
Rank ($\rho(T)$): Dimension of $\mathcal{R}(T)$.
Nullity ($\nu(T)$): Dimension of $\mathcal{N}(T)$.
Theorem: $\rho(T) + \nu(T) = \dim V$.
Invertibility: For $T: V \to V$ with $\dim V$ finite, $T$ is one-one ↔ $T$ is onto ↔ $T$ is invertible.
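The rank-nullity identity can be checked numerically for the map $x \mapsto Ax$ (a NumPy sketch with an illustrative matrix):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],       # a dependent row: 2 x (row 1)
              [1., 0., 1.]])
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank        # nullity = dimension of the null space
assert rank + nullity == A.shape[1]   # rank + nullity = dim of the domain
print(rank, nullity)
```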
Similarity
Matrices $B$ and $C$ are similar if $C = P^{-1} B P$ for some invertible matrix $P$.
Similar matrices represent the same linear transformation in different bases.
Theorem: If $A = T[\mathcal{B}_1, \mathcal{B}_1]$ and $B = T[\mathcal{B}_2, \mathcal{B}_2]$, then $B = P^{-1} A P$, where $P$ is the change of basis matrix between $\mathcal{B}_1$ and $\mathcal{B}_2$.
Chapter 5: Inner Product Spaces
Basic Definition and Norms
Inner Product ($\langle \cdot, \cdot \rangle$): A map $V \times V \to \mathbb{F}$ satisfying conjugate symmetry, linearity in the first component, and positive definiteness.
Norm / Length ($\|u\|$): $\|u\| = \sqrt{\langle u, u \rangle}$.
Cauchy-Schwarz Inequality: $|\langle u, v \rangle| \le \|u\|\,\|v\|$.
Angle ($\theta$): Defined in real spaces by $\cos\theta = \frac{\langle u, v \rangle}{\|u\|\,\|v\|}$.
Orthogonality: Vectors $u, v$ are orthogonal if $\langle u, v \rangle = 0$.
Pythagoras Theorem: If $\langle u, v \rangle = 0$, then $\|u + v\|^2 = \|u\|^2 + \|v\|^2$.
Orthonormal Sets and Gram-Schmidt
Orthonormal Set: Vectors are unit vectors and mutually orthogonal ($\langle v_i, v_j \rangle = \delta_{ij}$).
Gram-Schmidt Process: Converts an independent set $\{u_1, \ldots, u_n\}$ to an orthonormal set $\{v_1, \ldots, v_n\}$. - $v_1 = \frac{u_1}{\|u_1\|}$ - $v_k = \frac{w_k}{\|w_k\|}$ where $w_k = u_k - \sum_{j=1}^{k-1} \langle u_k, v_j \rangle v_j$.
QR Decomposition: Any square matrix $A = QR$, where $Q$ is orthogonal/unitary and $R$ is upper triangular.
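A direct NumPy implementation of the process (an illustrative sketch; `numpy.linalg.qr` performs the equivalent factorisation in practice):

```python
import numpy as np

def gram_schmidt(U):
    """Orthonormalise the columns of U (assumed linearly independent)."""
    V = np.array(U, dtype=float)
    for k in range(V.shape[1]):
        for j in range(k):                       # subtract projections on earlier vectors
            V[:, k] -= (V[:, j] @ V[:, k]) * V[:, j]
        V[:, k] /= np.linalg.norm(V[:, k])       # normalise
    return V
```

The columns of the result form an orthonormal set, so $Q^T Q = I$.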
Orthogonal Projections
Subspace Orthogonal Complement ($S^{\perp}$): Set of all vectors orthogonal to every vector in $S$.
Orthogonal Projection ($P_W$): Maps $v$ to its nearest vector in the subspace $W$.
Matrix of Projection: If $\{f_1, \ldots, f_k\}$ is an orthonormal basis of $W$, then $P_W(v) = \sum_{i=1}^{k} \langle v, f_i \rangle f_i$.
Self-Adjoint Operator: $\langle T(u), v \rangle = \langle u, T(v) \rangle$. Real symmetric matrices generate self-adjoint operators.
Chapter 6: Eigenvalues and Diagonalization
Definitions and Characteristics
Characteristic Equation: $\det(A - \lambda I) = 0$.
Eigenvalue ($\lambda$): A root of the characteristic equation.
Eigenvector ($x$): A non-zero vector such that $Ax = \lambda x$.
Trace and Determinant: $\mathrm{tr}(A) = \sum_i \lambda_i$ and $\det(A) = \prod_i \lambda_i$.
Cayley Hamilton Theorem: A matrix satisfies its own characteristic equation: if $p(\lambda) = \det(A - \lambda I)$, then $p(A) = \mathbf{0}$.
Independency: Eigenvectors corresponding to distinct eigenvalues are linearly independent.
Diagonalization
A matrix $A$ is diagonalizable if there exists a non-singular $S$ such that $S^{-1} A S$ is diagonal.
Equivalent to $A$ having $n$ linearly independent eigenvectors.
Unitary Diagonalizability: - Hermitian matrices have real eigenvalues and admit an orthonormal basis of eigenvectors. - Normal matrices ($A^* A = A A^*$) are unitarily diagonalizable.
Schur’s Lemma: Every square matrix is unitarily similar to an upper triangular matrix.
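A quick numerical illustration of diagonalisation with NumPy (the symmetric matrix is an illustrative example):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])
eigvals, S = np.linalg.eig(A)      # columns of S are eigenvectors
D = np.linalg.inv(S) @ A @ S       # S^{-1} A S should be diagonal
assert np.allclose(D, np.diag(eigvals))
print(sorted(eigvals.round(6)))
```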
Sylvester’s Law of Inertia and Quadratic Forms
Quadratic Form: $Q(x) = x^T A x$ with $A$ real symmetric.
Hermitian Form: $H(x) = x^* A x$ with $A$ Hermitian.
Sylvester’s Law: Any Hermitian form can be represented as $\sum_{i=1}^{p} |z_i|^2 - \sum_{i=p+1}^{r} |z_i|^2$, where $p$ and $r$ are invariant.
Conic Sections: Uses eigenvalues of the associated quadratic form to classify ellipses, parabolas, and hyperbolas.
Part II: Ordinary Differential Equations
Chapter 7: First Order Differential Equations
Preliminaries
Ordinary Differential Equation (ODE): A relation $f(x, y, y', \ldots, y^{(n)}) = 0$.
Order: The order of the highest derivative present.
Solution: A function $y(x)$ satisfying the equation on an interval $I$.
General Solution: A family of solutions involving one or more arbitrary constants.
Solution Methods
Separable Equations: $\frac{dy}{dx} = f(x)\,g(y)$. Solve by integration: $\int \frac{dy}{g(y)} = \int f(x)\,dx + c$.
Homogeneous Reducible: $\frac{dy}{dx} = g\left(\frac{y}{x}\right)$. Use substitution $y = ux$.
Exact Equations: $M(x, y)\,dx + N(x, y)\,dy = 0$ is exact if $\frac{\partial M}{\partial y} = \frac{\partial N}{\partial x}$. General solution is $F(x, y) = c$, where $\partial F/\partial x = M$ and $\partial F/\partial y = N$.
Integrating Factors ($\mu$): Multiplier to make a non-exact equation exact.
First Order Linear Equations: $y' + p(x)\,y = q(x)$. - Integrating factor: $\mu(x) = e^{\int p(x)\,dx}$. - Solution: $y = \frac{1}{\mu(x)}\left(\int \mu(x)\,q(x)\,dx + c\right)$.
Bernoulli Equations: $y' + p(x)\,y = q(x)\,y^n$. Reduced to linear via $u = y^{1-n}$.
Initial Value Problems (IVP)
Problem involving an ODE and initial conditions ($y(x_0) = y_0$).
Picard’s Successive Approximations: $y_{n+1}(x) = y_0 + \int_{x_0}^{x} f\big(t, y_n(t)\big)\,dt$.
Existence and Uniqueness: Picard's Theorem guarantees a unique local solution if $f$ and $\partial f/\partial y$ are continuous.
Applications
Orthogonal Trajectories: Curves that intersect a given family at right angles. Found by replacing $\frac{dy}{dx}$ with $-\frac{dx}{dy}$ in the differential equation of the family.
Euler’s Method: Numerical approximation defined by $y_{k+1} = y_k + h\,f(x_k, y_k)$.
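A minimal plain-Python sketch of Euler’s method, applied to $y' = y$, $y(0) = 1$, whose exact value at $x = 1$ is $e \approx 2.71828$:

```python
def euler(f, x0, y0, h, steps):
    """Euler's method: repeatedly apply y_{k+1} = y_k + h * f(x_k, y_k)."""
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

approx = euler(lambda x, y: y, 0.0, 1.0, 0.001, 1000)   # approximates e at x = 1
print(approx)
```

With step $h = 0.001$ the result is within about $0.002$ of $e$; halving $h$ roughly halves the error, reflecting the method's first-order accuracy.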
Chapter 8: Higher Order Linear Equations
Homogeneous Equations
Superposition Principle: Linear combinations of solutions are also solutions.
Wronskian ($W$): Determinant of the solutions and their derivatives; for two functions, $W(y_1, y_2) = y_1 y_2' - y_2 y_1'$.
Independence: Solutions are independent if and only if the Wronskian is non-zero.
Fundamental System: A set of $n$ linearly independent solutions for an $n^{th}$-order equation.
Reduction of Order: If one solution $y_1$ of $y'' + p(x)\,y' + q(x)\,y = 0$ is known, another is $y_2 = y_1 \int \frac{e^{-\int p\,dx}}{y_1^2}\,dx$.
Constant Coefficients
Characteristic equation: for $a y'' + b y' + c y = 0$, substituting $y = e^{\lambda x}$ gives $a\lambda^2 + b\lambda + c = 0$.
Roots define solutions: - Distinct real $\lambda_1, \lambda_2$: $y = c_1 e^{\lambda_1 x} + c_2 e^{\lambda_2 x}$. - Repeated real $\lambda$: $y = (c_1 + c_2 x)\,e^{\lambda x}$. - Complex conjugates ($\alpha \pm i\beta$): $y = e^{\alpha x}(c_1 \cos\beta x + c_2 \sin\beta x)$.
Non-Homogeneous Equations
General solution: $y = y_h + y_p$ (homogeneous part plus a particular solution).
Method of Undetermined Coefficients: - Guess $y_p$ based on the forcing function $r(x)$. - Handles exponentials, sines/cosines, and polynomials.
Variation of Parameters: General method for finding $y_p$ using the Wronskian. - $y_p = -y_1 \int \frac{y_2\,r}{W}\,dx + y_2 \int \frac{y_1\,r}{W}\,dx$.
Chapter 9: Power Series Solutions
Basic Theory
Power Series: $\sum_{n=0}^{\infty} a_n (x - x_0)^n$.
Radius of Convergence ($R$): Defined by $\frac{1}{R} = \lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right|$ or the root test.
Ordinary Point: A point where coefficients are analytic. Guarantees power series solutions.
Legendre Equations and Polynomials
Equation: $(1 - x^2)\,y'' - 2x\,y' + p(p + 1)\,y = 0$.
Legendre Polynomials ($P_n(x)$): Polynomial solutions satisfying $P_n(1) = 1$.
Rodrigues’ Formula: $P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n}(x^2 - 1)^n$.
Orthogonality: $\int_{-1}^{1} P_m(x)\,P_n(x)\,dx = 0$ for $m \neq n$.
Normalization: $\int_{-1}^{1} P_n(x)^2\,dx = \frac{2}{2n + 1}$.
Generating Function: $\frac{1}{\sqrt{1 - 2xt + t^2}} = \sum_{n=0}^{\infty} P_n(x)\,t^n$.
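For numerical work the polynomials are usually evaluated with Bonnet’s recurrence $(k+1)\,P_{k+1}(x) = (2k+1)\,x\,P_k(x) - k\,P_{k-1}(x)$ (a standard identity, though not derived in these notes); a plain-Python sketch:

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) by Bonnet's recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x          # P_0(x) and P_1(x)
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p
```

Check against $P_2(x) = (3x^2 - 1)/2$: `legendre(2, 0.5)` gives $-0.125$, and $P_n(1) = 1$ for every $n$.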
Part III: Laplace Transform
Chapter 10: Laplace Transform
Definitions and Properties
Definition: $F(s) = \mathcal{L}(f) = \int_{0}^{\infty} e^{-st} f(t)\,dt$.
Linearity: $\mathcal{L}(af + bg) = a\,\mathcal{L}(f) + b\,\mathcal{L}(g)$.
Shifting Theorems: - s-Shifting: $\mathcal{L}\big(e^{at} f(t)\big) = F(s - a)$. - t-Shifting: $\mathcal{L}\big(f(t - a)\,U_a(t)\big) = e^{-as} F(s)$.
Derivatives: $\mathcal{L}(f') = sF(s) - f(0)$ and $\mathcal{L}(f'') = s^2 F(s) - s f(0) - f'(0)$.
Integrals: $\mathcal{L}\left(\int_0^t f(\tau)\,d\tau\right) = \frac{F(s)}{s}$.
Convolution ($f * g$): $(f * g)(t) = \int_0^t f(\tau)\,g(t - \tau)\,d\tau$. Result is $\mathcal{L}(f * g) = F(s)\,G(s)$.
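The defining improper integral can be approximated numerically by truncating at a large $T$ (a trapezoidal-sum sketch; the truncation point and step count are arbitrary choices of this example):

```python
import math

def laplace(f, s, T=50.0, n=100000):
    """Approximate L{f}(s) = integral of e^{-st} f(t) over [0, inf), truncated at t = T."""
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))   # trapezoidal end-points
    for k in range(1, n):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

# L{e^{-t}}(s) = 1/(s + 1), so at s = 2 the value should be close to 1/3.
print(laplace(lambda t: math.exp(-t), 2.0))
```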
Applications
Solving Differential Equations: Transforms calculus operations into algebraic operations.
Unit Step Function ($U_a(t)$): 0 for $t < a$, 1 for $t \ge a$.
Dirac Delta Function ($\delta(t)$): Unit-impulse function with $\int_{-\infty}^{\infty} \delta(t)\,dt = 1$; $\mathcal{L}\big(\delta(t - a)\big) = e^{-as}$.
Part IV: Numerical Applications
Chapters 11-13: Interpolation, Differentiation, and Integration
Difference Operators
Forward ($\Delta$): $\Delta f(x_i) = f(x_i + h) - f(x_i)$.
Backward ($\nabla$): $\nabla f(x_i) = f(x_i) - f(x_i - h)$.
Central ($\delta$): $\delta f(x_i) = f(x_i + h/2) - f(x_i - h/2)$.
Shift ($E$): $E f(x_i) = f(x_i + h)$.
Averaging ($\mu$): $\mu f(x_i) = \frac{1}{2}\big[f(x_i + h/2) + f(x_i - h/2)\big]$.
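The operators translate directly into code (a plain-Python sketch; $h$ is the table spacing):

```python
def forward_diff(f, x, h):
    return f(x + h) - f(x)                       # forward difference of f at x

def backward_diff(f, x, h):
    return f(x) - f(x - h)                       # backward difference of f at x

def central_diff(f, x, h):
    return f(x + h / 2) - f(x - h / 2)           # central difference of f at x

def shift(f, x, h):
    return f(x + h)                              # shift operator E

def average(f, x, h):
    return 0.5 * (f(x + h / 2) + f(x - h / 2))   # averaging operator
```

Note the identities $\Delta = E - 1$ and $\nabla f(x + h) = \Delta f(x)$.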
Interpolation Formulae
Newton’s Forward: Best used for values near the start of a table.
Newton’s Backward: Best used for values near the end of a table.
Lagrange’s Formula: Used for unequally spaced tabular points.
Stirling’s Formula: Used for values near the middle of a table.
Divided Differences: Recursive ratio $f[x_i, \ldots, x_{i+k}] = \frac{f[x_{i+1}, \ldots, x_{i+k}] - f[x_i, \ldots, x_{i+k-1}]}{x_{i+k} - x_i}$, with $f[x_i] = f(x_i)$.
Numerical Differentiation and Integration
Differentiation: Derived by differentiating interpolating polynomials.
Integration (Quadrature): - Trapezoidal Rule: Integrates using linear approximation; the error is $O(h^2)$, proportional to $f''$. - Simpson’s Rule: Integrates using quadratic approximation. Requires an even number of intervals. Form: $\int_{x_0}^{x_{2n}} f(x)\,dx \approx \frac{h}{3}\big[f_0 + 4(f_1 + f_3 + \cdots + f_{2n-1}) + 2(f_2 + f_4 + \cdots + f_{2n-2}) + f_{2n}\big]$.
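Both rules in composite form, as a plain-Python sketch (illustrative code, not from the notes):

```python
def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n sub-intervals."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n))
    return s * h

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2:
        raise ValueError("Simpson's rule needs an even number of intervals")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))   # odd-indexed nodes
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))   # even-indexed interior nodes
    return s * h / 3
```

Simpson’s rule is exact for polynomials up to degree three, so it returns $\int_0^1 x^2\,dx = 1/3$ exactly, while the trapezoidal value $0.34375$ (with $n = 4$) carries the expected $O(h^2)$ error.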