Comprehensive Notes on Linear Algebra, Ordinary Differential Equations, and Numerical Methods

General Information and Authors
  • These notes cover Mathematics - 1021, authored by Peeyush Chandra, A. K. Lal, V. Raghavendra, and G. Santhanam.

  • Supported by a grant from MHRD.

Part I: Linear Algebra

Chapter 1: Matrices

Definition and Notation
  • Matrix: A rectangular array of numbers; these notes are mostly concerned with real entries.

  • Rows and Columns: Horizontal arrays are rows; vertical arrays are columns.

  • Order: A matrix with m rows and n columns has order m × n.

  • Representation: Denoted A = [a_ij], where a_ij is the entry in the i-th row and j-th column.

  • Vectors: A matrix with one column is a column vector; one with one row is a row vector.

  • Equality: Matrices A = [a_ij] and B = [b_ij] of the same order are equal if a_ij = b_ij for all i, j.

Special Matrices
  • Zero Matrix: Each entry is zero, denoted 0.

  • Square Matrix: The number of rows equals the number of columns (n × n).

  • Diagonal Entries: In a square matrix of order n, the entries a_11, a_22, …, a_nn form the principal diagonal.

  • Diagonal Matrix: a_ij = 0 for all i ≠ j. Denoted diag(d_1, …, d_n).

  • Scalar Matrix: A diagonal matrix whose diagonal entries are all equal (d_i = d).

  • Identity Matrix (I_n): The square matrix with a_ii = 1 and a_ij = 0 for i ≠ j.

  • Triangular Matrices:
    - Upper Triangular: a_ij = 0 for i > j.
    - Lower Triangular: a_ij = 0 for i < j.

Operations on Matrices
  • Transpose: For an m × n matrix A, the transpose A^t is the n × m matrix with entries b_ij = a_ji.
    - Property: (A^t)^t = A.

  • Addition: Defined for matrices of the same order by C = A + B with c_ij = a_ij + b_ij.
    - Properties: Commutative (A + B = B + A) and Associative ((A + B) + C = A + (B + C)).

  • Scalar Multiplication: For k ∈ R, kA = [k·a_ij].

  • Additive Inverse: -A = (-1)A, so that A + (-A) = 0.

  • Matrix Multiplication:
    - For A (m × n) and B (n × r), the product AB = C (m × r) is defined by c_ij = ∑_{k=1}^{n} a_ik b_kj.
    - The product AB is defined only if the number of columns of A equals the number of rows of B.
    - Commutativity: In general AB ≠ BA; if AB = BA, the matrices commute.
    - Associativity: (AB)C = A(BC).
    - Distributive law: A(B + C) = AB + AC.
    - Multiplication by the Identity: AI_n = I_n A = A.
    - Multiplication by a Diagonal Matrix: In DA, the i-th row of A is multiplied by d_i; in AD, the j-th column of A is multiplied by d_j.

  • Transpose Laws: (A + B)^t = A^t + B^t and (AB)^t = B^t A^t.
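The multiplication rule and the transpose laws above can be checked directly; a minimal plain-Python sketch (the helper names `matmul` and `transpose` are illustrative, not from the notes):

```python
# c_ij = sum_k a_ik * b_kj, used here to verify (AB)^t = B^t A^t.

def transpose(A):
    return [list(row) for row in zip(*A)]

def matmul(A, B):
    # the number of columns of A must equal the number of rows of B
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]      # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]       # 3 x 2

AB = matmul(A, B)    # 2 x 2; equals [[58, 64], [139, 154]]
assert transpose(AB) == matmul(transpose(B), transpose(A))
```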

More Special Matrices
  • Symmetric: A^t = A.

  • Skew-Symmetric: A^t = -A. The diagonal entries must be zero.

  • Orthogonal: AA^t = A^t A = I.

  • Nilpotent: There exists a positive integer k such that A^k = 0. The least such k is the order of nilpotency.

  • Idempotent: A² = A.

  • Trace: For a square matrix, tr(A) = a_11 + a_22 + … + a_nn.
    - Laws: tr(A + B) = tr(A) + tr(B) and tr(AB) = tr(BA).

Block Matrices
  • Matrices can be decomposed into smaller blocks (submatrices).

  • If A = [P Q] is partitioned into column blocks and B = [H; K] is partitioned into matching row blocks, then AB = PH + QK.

  • Block addition and multiplication require compatible partitions.

Matrices over Complex Numbers
  • Conjugate (Ā): Obtained by replacing each entry a_ij with its complex conjugate ā_ij.

  • Conjugate Transpose (A*): The transpose of the conjugate matrix, so the (i, j) entry of A* is ā_ji.

  • Hermitian: A* = A.

  • Skew-Hermitian: A* = -A.

  • Unitary: A*A = AA* = I.

  • Normal: AA* = A*A.

Chapter 2: Linear System of Equations

Introduction to Linear Systems
  • A linear system of m equations in n unknowns is written Ax = b.

  • A is the coefficient matrix; [A b] is the augmented matrix.

  • Homogeneous System: Ax = 0. It always has the trivial solution x = 0.

  • Solution Set: A system can have a unique solution, infinitely many solutions, or no solution.

Elementary Operations and Equivalent Systems
  • Elementary Row Operations:
    - R_ij: Interchange rows i and j.
    - R_k(c): Multiply row k by a non-zero constant c.
    - R_kj(c): Replace row k by (row k + c × row j).

  • Row-Equivalent: Two matrices are row-equivalent if one is obtained from the other via elementary row operations.

  • Lemma: Equivalent linear systems have the same set of solutions.

Gauss Elimination and Echelon Forms
  • Gauss Elimination (Forward Elimination): Reduces the augmented matrix to upper triangular form.

  • Row Reduced Form:
    - The first non-zero entry in each row is 1 (the leading term).
    - The column containing a leading 1 has zeros elsewhere.

  • Row Reduced Echelon Form (RREF):
    - Satisfies the row reduced form.
    - Zero rows are at the bottom.
    - Leading 1s appear in a staircase pattern (moving right as you move down).

  • Variables:
    - Basic Variables: Variables corresponding to leading columns.
    - Free Variables: Variables not corresponding to leading columns.

  • Gauss-Jordan Elimination: Forward elimination followed by back substitution, reaching the RREF.
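The three elementary row operations drive the whole reduction. A plain-Python sketch of Gauss-Jordan elimination to RREF (the function name `rref` and use of exact `Fraction` arithmetic are illustrative choices, not from the notes):

```python
from fractions import Fraction

def rref(M):
    # reduce a matrix (e.g. an augmented matrix [A | b]) to row reduced echelon form
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    pivot_row = 0
    for col in range(cols):
        # find a row at or below pivot_row with a non-zero entry in this column
        pr = next((r for r in range(pivot_row, rows) if M[r][col] != 0), None)
        if pr is None:
            continue
        M[pivot_row], M[pr] = M[pr], M[pivot_row]        # R_ij: interchange
        piv = M[pivot_row][col]
        M[pivot_row] = [x / piv for x in M[pivot_row]]   # R_k(c): scale to a leading 1
        for r in range(rows):
            if r != pivot_row and M[r][col] != 0:        # R_kj(c): eliminate the column
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return M

# Solve x + y = 3, x - y = 1: the RREF of the augmented matrix reads off x = 2, y = 1
aug = [[1, 1, 3], [1, -1, 1]]
result = rref(aug)   # equals [[1, 0, 2], [0, 1, 1]]
```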

Elementary Matrices
  • Obtained by applying a single elementary row operation to the Identity matrix.

  • Multiplying AA on the left by an elementary matrix performs the corresponding operation on AA.

  • Multiplying on the right performs column transformations.

Rank of a Matrix
  • Row-rank: The number of non-zero rows in the row reduced form.

  • Rank: Row-rank equals column-rank; denoted rank(A).

  • Theorem: If rank(A) = r, there exist elementary matrices giving P and Q such that PAQ = [ I_r 0 ; 0 0 ] in block form.

Consistency Theorem (Ax = b)
  • Let rank(A) = r and rank([A b]) = r_a.

  • Infinite Solutions: If r = r_a < n. The solution set has the form {u_0 + k_1 u_1 + … + k_{n-r} u_{n-r}}.

  • Unique Solution: If r = r_a = n.

  • No Solution (Inconsistent): If r < r_a.

  • Homogeneous Systems: Ax = 0 has non-trivial solutions if and only if rank(A) < n.

Invertible Matrices
  • Definition: A square matrix A is invertible if there exists B such that AB = BA = I.

  • Uniqueness: The inverse A^{-1} is unique.

  • Properties:
    - (A^{-1})^{-1} = A.
    - (AB)^{-1} = B^{-1} A^{-1}.
    - (A^t)^{-1} = (A^{-1})^t.

  • Equivalent conditions for invertibility:
    - A is invertible.
    - rank(A) = n (full rank).
    - The RREF of A is I_n.
    - A is a product of elementary matrices.
    - Ax = 0 has only the trivial solution.
    - Ax = b is consistent for every b.

  • Calculating the Inverse: Apply Gauss-Jordan elimination to [A I_n] to obtain [I_n A^{-1}].
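The [A | I_n] → [I_n | A^{-1}] procedure can be sketched directly; a plain-Python illustration that assumes A is invertible (no safeguards for singular input, and the name `inverse` is ours):

```python
from fractions import Fraction

def inverse(A):
    n = len(A)
    # build the augmented matrix [A | I_n]
    M = [[Fraction(A[i][j]) for j in range(n)] +
         [Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for col in range(n):
        pr = next(r for r in range(col, n) if M[r][col] != 0)  # pivot search
        M[col], M[pr] = M[pr], M[col]
        piv = M[col][col]
        M[col] = [x / piv for x in M[col]]                     # leading 1
        for r in range(n):
            if r != col:                                       # clear the column
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # the right half is now A^{-1}
    return [row[n:] for row in M]

A = [[2, 1], [1, 1]]
Ainv = inverse(A)   # equals [[1, -1], [-1, 2]]
```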

Determinants
  • Definition: Defined inductively for square matrices.

  • Notation: det(A) = ∑_{j=1}^{n} (-1)^{1+j} a_1j det(A(1|j)), where A(1|j) is obtained by deleting row 1 and column j.

  • Minor (A_ij): The determinant of the submatrix obtained by deleting row i and column j.

  • Cofactor (C_ij): (-1)^{i+j} A_ij.

  • Properties:
    - Interchanging two rows changes the sign.
    - Multiplying a row by c multiplies the determinant by c.
    - If two rows are equal, det = 0.
    - det(AB) = det(A) det(B).
    - det(A^t) = det(A).

  • Adjoint (Adj(A)): The transpose of the cofactor matrix.

  • Standard Inverse Formula: A^{-1} = (1/det(A)) Adj(A).

  • Singular Matrix: det(A) = 0; non-singular if det(A) ≠ 0.

  • Cramer’s Rule: For non-singular A, the solution coordinates are x_j = det(A_j)/det(A), where A_j is A with column j replaced by the vector b.
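Cramer's rule and the first-row cofactor expansion above can be combined into a short sketch for small systems (the helper names `det` and `cramer` are ours; this is illustrative, not an efficient method):

```python
def det(A):
    # cofactor expansion along row 1: sum_j (-1)^(1+j) a_1j det(A(1|j))
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def cramer(A, b):
    d = det(A)
    assert d != 0, "Cramer's rule needs a non-singular A"
    sol = []
    for j in range(len(A)):
        # A_j: replace column j of A with the vector b
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        sol.append(det(Aj) / d)
    return sol

# 2x + y = 5, x + 3y = 10  ->  x = 1, y = 3
solution = cramer([[2, 1], [1, 3]], [5, 10])   # [1.0, 3.0]
```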

Chapter 3: Finite Dimensional Vector Spaces

Definitions and Examples
  • Vector Space V(F): A set with vector addition and scalar multiplication satisfying 8 axioms (associativity and commutativity of addition, zero vector, additive inverses, distributive laws, etc.).

  • Examples:
    - R^n: n-tuples of real numbers.
    - P_n(R): Polynomials of degree ≤ n.
    - M_n(R): n × n matrices.
    - C([a, b]): Continuous functions on [a, b].

  • Subspace: A subset S of V is a subspace if αu + βv ∈ S for all vectors u, v ∈ S and scalars α, β.

Linear Combination and Span
  • Linear Combination: α_1 u_1 + … + α_n u_n.

  • Linear Span (L(S)): The set of all linear combinations of elements of S. It is the smallest subspace containing S.

  • Row Space: Spanned by the rows of a matrix; dim(Row Space) = row rank.

  • Column Space / Range (Im(A)): Spanned by the columns of a matrix.

Linear Independence and Bases
  • Linearly Dependent: There exist scalars α_i, not all zero, such that ∑ α_i u_i = 0.

  • Linearly Independent: ∑ α_i u_i = 0 implies all α_i = 0.

  • Basis: A linearly independent set that spans V.

  • Dimension (dim(V)): The number of vectors in a basis of a finite-dimensional space.

  • Important Theorems:
    - Any two bases have the same number of vectors.
    - A linearly independent set can be extended to a basis.
    - dim(M) + dim(N) = dim(M + N) + dim(M ∩ N).
    - Row Rank = Column Rank.

Ordered Bases and Coordinates
  • Ordered Basis: A basis whose elements are listed in a fixed order.

  • Coordinates ([v]_B): The column vector of coefficients representing v in the ordered basis B.

  • Change of Basis Matrix: Let B_1 = (u_1, …, u_n) and B_2 = (v_1, …, v_n). Then [v]_{B_1} = A[v]_{B_2}, where the i-th column of A is [v_i]_{B_1}.

Chapter 4: Linear Transformations

Definitions and Basic Properties
  • Linear Transformation: A map T : V → W such that T(αu + βv) = αT(u) + βT(v).

  • Properties:
    - T(0) = 0.
    - A linear transformation is determined by its values on a basis.

  • Inverse Transform: If T is one-one and onto, then T^{-1} exists and is linear.

Matrix of a Linear Transformation
  • Let A = T[B_1, B_2]. This matrix maps coordinates with respect to the basis B_1 of V to coordinates with respect to the basis B_2 of W.

  • Identity: [T(x)]_{B_2} = A[x]_{B_1}.

Rank-Nullity Theorem
  • Range (R(T)): The subspace of images {T(v)}.

  • Null Space / Kernel (N(T)): The subspace of vectors mapping to 0.

  • Rank (ρ(T)): The dimension of R(T).

  • Nullity (ν(T)): The dimension of N(T).

  • Theorem: ρ(T) + ν(T) = dim(V).

  • Invertibility: For T : V → V, T is one-one ⇔ T is onto ⇔ T is invertible.

Similarity
  • Matrices B and C are similar if B = PCP^{-1} for some invertible P.

  • Similar matrices represent the same linear transformation in different bases.

  • Theorem: If B = T[B_1, B_1] and C = T[B_2, B_2], then C = A^{-1}BA, where A = I[B_2, B_1].

Chapter 5: Inner Product Spaces

Basic Definition and Norms
  • Inner Product (⟨u, v⟩): A map V × V → F satisfying conjugate symmetry, linearity in the first component, and positive definiteness.

  • Norm / Length (‖u‖): √⟨u, u⟩.

  • Cauchy-Schwarz Inequality: |⟨u, v⟩| ≤ ‖u‖ ‖v‖.

  • Angle (θ): Defined in real spaces by cos(θ) = ⟨u, v⟩ / (‖u‖ ‖v‖).

  • Orthogonality: Vectors u and v are orthogonal if ⟨u, v⟩ = 0.

  • Pythagoras Theorem: If ⟨u, v⟩ = 0, then ‖u - v‖² = ‖u‖² + ‖v‖².

Orthonormal Sets and Gram-Schmidt
  • Orthonormal Set: The vectors are unit vectors and mutually orthogonal.

  • Gram-Schmidt Process: Converts a linearly independent set {u_1, u_2, …, u_n} to an orthonormal set {v_1, …, v_n}:
    - w_i = u_i - ∑_{j=1}^{i-1} ⟨u_i, v_j⟩ v_j
    - v_i = w_i / ‖w_i‖

  • QR Decomposition: Any square matrix factors as A = QR, where Q is orthogonal/unitary and R is upper triangular.
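The two Gram-Schmidt formulas translate directly into code; a sketch in R^n with the standard dot product (the function names are ours):

```python
from math import sqrt, isclose

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    basis = []
    for u in vectors:
        w = list(u)
        for v in basis:                     # w_i = u_i - sum_j <u_i, v_j> v_j
            c = dot(u, v)
            w = [wi - c * vi for wi, vi in zip(w, v)]
        norm = sqrt(dot(w, w))              # assumes the input set is independent
        basis.append([wi / norm for wi in w])   # v_i = w_i / ||w_i||
    return basis

v1, v2 = gram_schmidt([[3, 1], [2, 2]])
assert isclose(dot(v1, v2), 0.0, abs_tol=1e-12)   # mutually orthogonal
assert isclose(dot(v1, v1), 1.0)                  # unit length
```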

Orthogonal Projections
  • Orthogonal Complement (W^⊥): The set of all vectors orthogonal to every vector in W.

  • Orthogonal Projection (P_W): Maps v to the vector in W nearest to it.

  • Matrix of Projection: If the columns of A = [v_1, …, v_k] form an orthonormal basis of W, the projection matrix is AA^t.

  • Self-Adjoint Operator: ⟨T(u), v⟩ = ⟨u, T(v)⟩. Real symmetric matrices give self-adjoint operators.

Chapter 6: Eigenvalues and Diagonalization

Definitions and Characteristics
  • Characteristic Equation: det(A - λI) = 0.

  • Eigenvalue (λ): A root of the characteristic equation.

  • Eigenvector (x): A non-zero vector such that Ax = λx.

  • Trace and Determinant: tr(A) = ∑ λ_i and det(A) = ∏ λ_i.

  • Cayley-Hamilton Theorem: A matrix satisfies its own characteristic equation, p(A) = 0.

  • Independence: Eigenvectors corresponding to distinct eigenvalues are linearly independent.
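For a 2 × 2 matrix the characteristic equation is λ² - tr(A)λ + det(A) = 0, so the trace/determinant identities and Cayley-Hamilton can be checked numerically; a small sketch (the example matrix is ours):

```python
def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[4, 1],
     [2, 3]]
tr = A[0][0] + A[1][1]                        # 7
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # 10
# eigenvalues are the roots of lambda^2 - 7*lambda + 10 = 0, i.e. 2 and 5
assert 2 + 5 == tr and 2 * 5 == det           # tr = sum, det = product of eigenvalues

# Cayley-Hamilton: A^2 - tr(A)*A + det(A)*I = 0
A2 = matmul2(A, A)
CH = [[A2[i][j] - tr * A[i][j] + det * (i == j) for j in range(2)] for i in range(2)]
assert CH == [[0, 0], [0, 0]]
```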

Diagonalization
  • A matrix is diagonalizable if there exists a non-singular P such that P^{-1}AP = D.

  • Equivalent to having n linearly independent eigenvectors.

  • Unitary Diagonalizability:
    - Hermitian matrices have real eigenvalues and orthonormal eigenvectors.
    - Normal matrices (AA* = A*A) are unitarily diagonalizable.

  • Schur’s Lemma: Every square matrix is unitarily similar to an upper triangular matrix.

Sylvester’s Law of Inertia and Quadratic Forms
  • Quadratic Form: Q(x) = x^t A x.

  • Hermitian Form: H(x) = x* A x.

  • Sylvester’s Law: Any Hermitian form can be written as H(x) = ∑_{i=1}^{p} |y_i|² - ∑_{j=p+1}^{r} |y_j|², where p and r are invariants.

  • Conic Sections: The eigenvalues of the associated quadratic form classify ellipses, parabolas, and hyperbolas.

Part II: Ordinary Differential Equations

Chapter 7: First Order Differential Equations

Preliminaries
  • Ordinary Differential Equation (ODE): A relation f(x, y, y′, …, y^(n)) = 0.

  • Order: The order of the highest derivative present.

  • Solution: A function y = f(x) satisfying the equation on an interval I.

  • General Solution: A family of solutions involving one or more arbitrary constants.

Solution Methods
  • Separable Equations: y′ = g(y)h(x). Solve by integration: ∫ dy/g(y) = ∫ h(x) dx.

  • Homogeneous Reducible: y′ = g(y/x). Use the substitution y = x·u(x).

  • Exact Equations: M(x, y) dx + N(x, y) dy = 0 is exact if ∂M/∂y = ∂N/∂x. The general solution is G(x, y) = c.

  • Integrating Factors (Q): A multiplier that makes a non-exact equation exact.

  • First Order Linear Equations: y′ + p(x)y = q(x).
    - Integrating factor: P(x) = e^{∫ p(x) dx}.
    - Solution: y = P(x)^{-1} [∫ P(x)q(x) dx + c].

  • Bernoulli Equations: y′ + p(x)y = q(x)y^a. Reduced to a linear equation via u = y^{1-a}.

Initial Value Problems (IVP)
  • A problem consisting of an ODE together with initial conditions (y(x_0) = y_0).

  • Picard’s Successive Approximations: y_n(x) = y_0 + ∫_{x_0}^{x} f(s, y_{n-1}(s)) ds.

  • Existence and Uniqueness: Picard's theorem guarantees a unique local solution when f and ∂f/∂y are continuous.

Applications
  • Orthogonal Trajectories: Curves that intersect a given family at right angles. Found by replacing y′ with -1/y′ in the differential equation of the family.

  • Euler’s Method: Numerical approximation defined by y_{k+1} ≈ y_k + h·f(x_k, y_k).
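The Euler update y_{k+1} ≈ y_k + h·f(x_k, y_k) is a one-line loop; a sketch applied to the IVP y′ = y, y(0) = 1, whose exact solution is e^x (the step size and step count are arbitrary illustrative choices):

```python
from math import exp

def euler(f, x0, y0, h, steps):
    xs, ys = [x0], [y0]
    for _ in range(steps):
        # y_{k+1} = y_k + h * f(x_k, y_k)
        ys.append(ys[-1] + h * f(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    return xs, ys

xs, ys = euler(lambda x, y: y, 0.0, 1.0, 0.01, 100)
# ys[-1] approximates e = 2.71828...; the first-order error shrinks with h
print(ys[-1], exp(1.0))
```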

Chapter 8: Higher Order Linear Equations

Homogeneous Equations
  • Superposition Principle: Linear combinations of solutions are also solutions.

  • Wronskian (W): The determinant of the solutions and their derivatives; for two functions, W(y_1, y_2) = y_1 y_2′ - y_1′ y_2.

  • Independence: Solutions are independent if and only if the Wronskian is non-zero.

  • Fundamental System: A set of n linearly independent solutions of an n-th order equation.

  • Reduction of Order: If one solution y_1 is known, another is of the form y_2 = u(x) y_1.

Constant Coefficients
  • Characteristic equation: p(λ) = 0.

  • The roots determine the solutions:
    - Distinct real λ: e^{λx}.
    - Repeated real λ: e^{λx}, x e^{λx}, ….
    - Complex conjugates α ± iβ: e^{αx} cos(βx), e^{αx} sin(βx).

Non-Homogeneous Equations
  • General solution: y = y_h + y_p.

  • Method of Undetermined Coefficients:
    - Guess y_p based on the forcing function f(x).
    - Handles exponentials, sines/cosines, and polynomials.

  • Variation of Parameters: A general method for finding y_p using the Wronskian:
    - y_p = -y_1 ∫ (y_2 f / W) dx + y_2 ∫ (y_1 f / W) dx.

Chapter 9: Power Series Solutions

Basic Theory
  • Power Series: ∑ a_n (x - x_0)^n.

  • Radius of Convergence (R): Given by 1/R = lim |a_{n+1}/a_n| (ratio test) or by the root test.

  • Ordinary Point: A point x_0 at which the coefficients are analytic; power series solutions are guaranteed there.

Legendre Equations and Polynomials
  • Equation: (1 - x²)y″ - 2xy′ + n(n + 1)y = 0.

  • Legendre Polynomials (P_n(x)): Polynomial solutions satisfying P_n(1) = 1.

  • Rodrigues’ Formula: P_n(x) = (1 / (2^n n!)) d^n/dx^n (x² - 1)^n.

  • Orthogonality: ∫_{-1}^{1} P_n(x) P_m(x) dx = 0 for n ≠ m.

  • Normalization: ∫_{-1}^{1} P_n²(x) dx = 2/(2n + 1).

  • Generating Function: (1 - 2xt + t²)^{-1/2} = ∑ P_n(x) t^n.
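The Legendre polynomials can be evaluated numerically via the standard three-term (Bonnet) recurrence (n+1)P_{n+1} = (2n+1)x·P_n - n·P_{n-1}, which is not listed above but follows from the generating function; a sketch that checks the P_n(1) = 1 normalization:

```python
from math import isclose

def legendre(n, x):
    # iterate the Bonnet recurrence upward from P_0 = 1, P_1 = x
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p

assert all(isclose(legendre(n, 1.0), 1.0) for n in range(6))   # P_n(1) = 1
assert isclose(legendre(2, 0.5), (3 * 0.5 ** 2 - 1) / 2)       # P_2 = (3x^2 - 1)/2
```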

Part III: Laplace Transform

Chapter 10: Laplace Transform

Definitions and Properties
  • Definition: L(f(t)) = F(s) = ∫_0^∞ e^{-st} f(t) dt.

  • Linearity: L(af + bg) = aL(f) + bL(g).

  • Shifting Theorems:
    - s-Shifting: L(e^{at} f(t)) = F(s - a).
    - t-Shifting: L(U_a(t) f(t - a)) = e^{-as} F(s).

  • Derivatives: L(f′) = sF(s) - f(0).

  • Integrals: L(∫_0^t f(τ) dτ) = F(s)/s.

  • Convolution (f ⋆ g): ∫_0^t f(τ) g(t - τ) dτ, with L(f ⋆ g) = F(s)G(s).
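The defining integral can be checked numerically against a known transform, L(e^{at}) = 1/(s - a) for s > a (the s-shift of L(1) = 1/s); a sketch in which the truncation point T, step count n, and function names are our illustrative choices:

```python
from math import exp, isclose

def laplace(f, s, T=50.0, n=200_000):
    # composite trapezoidal rule for F(s) = int_0^T e^{-st} f(t) dt;
    # the tail beyond T is negligible for the integrand used below
    h = T / n
    total = 0.5 * (f(0.0) + exp(-s * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += exp(-s * t) * f(t)
    return total * h

a, s = 1.0, 3.0
approx = laplace(lambda t: exp(a * t), s)
assert isclose(approx, 1.0 / (s - a), rel_tol=1e-6)   # matches 1/(s - a) = 0.5
```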

Applications
  • Solving Differential Equations: Transforms calculus operations into algebraic operations.

  • Unit Step Function (U_a(t)): 0 for t < a, 1 for t ≥ a.

  • Dirac Delta Function (δ(t)): The unit-impulse function, with L(δ(t - a)) = e^{-as}.

Part IV: Numerical Applications

Chapters 11-13: Interpolation, Differentiation, and Integration

Difference Operators
  • Forward (Δ): Δy_k = y_{k+1} - y_k.

  • Backward (∇): ∇y_k = y_k - y_{k-1}.

  • Central (δ): δy_k = y_{k+1/2} - y_{k-1/2}.

  • Shift (E): E y_k = y_{k+1}.

  • Averaging (μ): μy_k = ½(y_{k+1/2} + y_{k-1/2}).
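Repeated application of the forward operator Δ builds the difference table used by the interpolation formulas below; a sketch (the function name is ours) using y = x², for which the second differences are constant and the third vanish:

```python
def difference_table(ys):
    # row 0 is y; row m holds the m-th forward differences Δ^m y_k
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[k + 1] - prev[k] for k in range(len(prev) - 1)])
    return table

ys = [x * x for x in range(5)]      # 0, 1, 4, 9, 16 on unit-spaced points
table = difference_table(ys)
print(table[1])   # first differences:  [1, 3, 5, 7]
print(table[2])   # second differences: [2, 2, 2]
print(table[3])   # third differences vanish for a quadratic: [0, 0]
```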

Interpolation Formulae
  • Newton’s Forward: Best used for values near the start of a table.

  • Newton’s Backward: Best used for values near the end of a table.

  • Lagrange’s Formula: Used for unequally spaced tabular points.

  • Stirling’s Formula: Used for values near the middle of a table.

  • Divided Differences: The recursive ratio δ[x_i, x_j] = (f(x_i) - f(x_j)) / (x_i - x_j).

Numerical Differentiation and Integration
  • Differentiation: Derived by differentiating interpolating polynomials.

  • Integration (Quadrature):
    - Trapezoidal Rule: Integrates using a linear approximation; the error involves Δ²y.
    - Simpson’s Rule: Integrates using a quadratic approximation and requires an even number of intervals. Form: (h/3)[y_0 + y_n + 4(∑ y_odd) + 2(∑ y_even)].
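Both quadrature rules are short sums over equally spaced points; a sketch (the test integrand ∫_0^1 x² dx = 1/3 is our choice, and Simpson's rule is exact for it since the integrand is quadratic):

```python
from math import isclose

def trapezoidal(f, a, b, n):
    # h * [ (y_0 + y_n)/2 + y_1 + ... + y_{n-1} ]
    h = (b - a) / n
    return h * (0.5 * f(a) + 0.5 * f(b) + sum(f(a + k * h) for k in range(1, n)))

def simpson(f, a, b, n):
    # (h/3) * [ y_0 + y_n + 4*(sum of odd-index y) + 2*(sum of even-index y) ]
    assert n % 2 == 0, "Simpson's rule requires an even number of intervals"
    h = (b - a) / n
    odd = sum(f(a + k * h) for k in range(1, n, 2))
    even = sum(f(a + k * h) for k in range(2, n, 2))
    return (h / 3) * (f(a) + f(b) + 4 * odd + 2 * even)

f = lambda x: x * x
assert isclose(simpson(f, 0.0, 1.0, 4), 1 / 3)            # exact for quadratics
assert isclose(trapezoidal(f, 0.0, 1.0, 100), 1 / 3, rel_tol=1e-3)
```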