Applied Mathematics: Cartesian Tensors and Differential Equations Study Guide

FOUNDATIONS OF CARTESIAN TENSORS

  • Fundamental Concept and Definition

    • A tensor is a generalization of scalars and vectors, consisting of a group of numbers following specific transformation rules and possessing physical implications.

    • Physical laws are often independent of the coordinate system; hence, tensor forms are ideal for expressing these laws (e.g., Newton's Second Law, $\mathbf{f} = m\mathbf{a}$).

    • Invariance vs. Components: Vectors and tensors remain invariant (unchanged) upon a change of basis, but their individual components depend on the chosen coordinate system.

  • Classification by Rank

    • The rank (or order) $p$ determines the number of components. In an $N$-dimensional space, a tensor of order $p$ has $N^p$ components.

    • In 3D Euclidean space:

      • 0th order: Scalar (1 component).

      • 1st order: Vector ($3^1 = 3$ components).

      • 2nd order: Tensor ($3^2 = 9$ components).

  • Index (Indicial) Notation and Einstein Summation Convention

    • Developed to simplify complex tensor equations.

    • Free Index: Appears once in a term (e.g., $i$ in $f_i = \mathbf{f} \cdot \mathbf{e}_i$). It implies the equation holds for all possible values of the index (1, 2, 3).

    • Dummy (Summation) Index: Appears exactly twice in a term. Summation is implied over the range of the index without a summation symbol ($\sum$).

    • Einstein Convention: $\mathbf{f} = f_i \mathbf{e}_i$ means $f_1 \mathbf{e}_1 + f_2 \mathbf{e}_2 + f_3 \mathbf{e}_3$.

    • Substitution Rule with the Kronecker Delta: $u_i \delta_{ij} = u_j$. If an index of $\delta$ is a dummy index, replace it in the rest of the term with the other index of $\delta$ and remove the $\delta$.

  • Orthonormal Base Vectors and Special Symbols

    • Base Vectors: $(\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3)$ are mutually perpendicular unit vectors ($\mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}$).

    • Kronecker Delta ($\delta_{ij}$):

      • $\delta_{ij} = 1$ if $i = j$

      • $\delta_{ij} = 0$ if $i \neq j$

    • Permutation Symbol (Levi-Civita, $e_{ijk}$):

      • $e_{ijk} = 1$ for cyclic permutations (1,2,3), (2,3,1), (3,1,2).

      • $e_{ijk} = -1$ for anti-cyclic permutations (2,1,3), (1,3,2), (3,2,1).

      • $e_{ijk} = 0$ if any two indices are equal.

    • Vector Operations in Index Notation:

      • Dot Product: $\mathbf{a} \cdot \mathbf{b} = a_i b_i$

      • Cross Product: $(\mathbf{a} \times \mathbf{b})_k = e_{ijk} a_i b_j$

      • Scalar Triple Product: $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = e_{ijk} a_i b_j c_k$
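The index-notation formulas above can be checked numerically. The sketch below (assuming NumPy is available) builds the permutation symbol $e_{ijk}$ as a $3\times3\times3$ array and evaluates the dot, cross, and scalar triple products with `np.einsum`; the vectors are arbitrary examples.

```python
import numpy as np

# Permutation symbol e_ijk as a 3x3x3 array (0-based indices)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # cyclic permutations
    eps[k, j, i] = -1.0  # anti-cyclic permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 9.0])

dot = np.einsum('i,i->', a, b)                   # a_i b_i
cross = np.einsum('ijk,i,j->k', eps, a, b)       # (a x b)_k = e_ijk a_i b_j
triple = np.einsum('ijk,i,j,k->', eps, a, b, c)  # e_ijk a_i b_j c_k
```

Each `einsum` string mirrors the index expression directly: repeated indices are summed (dummy), indices after `->` are free.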

TRANSFORMATION RULES AND TENSOR ALGEBRA

  • Transformation of Vectors and Bases

    • Let $\ell_{ij} = \mathbf{e}_i \cdot \mathbf{e}'_j$ be the direction cosines (the cosine of the angle between old axis $x_i$ and new axis $x'_j$).

    • New components in terms of old: $f'_j = \ell_{ij} f_i$

    • Old components in terms of new: $f_i = \ell_{ij} f'_j$

    • The transformation matrix $[\mathbf{L}]$ is orthogonal ($[\mathbf{L}][\mathbf{L}]^T = [\mathbf{I}]$).
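A minimal numerical sketch of these transformation rules, assuming NumPy; the rotation angle and vector components are arbitrary illustrations. With the old basis taken as the identity, the direction-cosine matrix for a rotated basis is just the rotation matrix.

```python
import numpy as np

theta = 0.3  # arbitrary rotation angle for illustration
# New basis: the old basis rotated about the x3-axis, so l_ij = e_i . e'_j
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

f = np.array([1.0, 2.0, 3.0])            # components in the old basis
f_new = np.einsum('ij,i->j', L, f)       # f'_j = l_ij f_i
f_back = np.einsum('ij,j->i', L, f_new)  # f_i = l_ij f'_j
```

The round trip recovers the old components, and the length of the vector is unchanged, illustrating that the vector itself is invariant while its components are not.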

  • Second-Order Tensors as Linear Mappings

    • A 2nd-order tensor $\mathbf{T}$ is a linear mapping that takes a vector $\mathbf{u}$ to another vector $\mathbf{w} = \mathbf{T}(\mathbf{u})$.

    • Components are defined by $\mathbf{T}(\mathbf{e}_j) = T_{ij} \mathbf{e}_i$, i.e., $T_{ij} = \mathbf{e}_i \cdot \mathbf{T}(\mathbf{e}_j)$.

    • Dyads and Dyadics: A dyad (tensor product) $\mathbf{a}\mathbf{b}$ (or $\mathbf{a} \otimes \mathbf{b}$) is defined such that $(\mathbf{a}\mathbf{b}) \cdot \mathbf{v} = \mathbf{a}(\mathbf{b} \cdot \mathbf{v})$.

  • General Tensor Operations

    • Contraction: Summing over a pair of indices reduces the rank by 2 (e.g., $T_{ii} = T_{11} + T_{22} + T_{33}$).

    • Tensor Product: A rank-$m$ tensor combined with a rank-$n$ tensor gives a rank-$(m+n)$ tensor.

    • Symmetry: $\mathbf{T}$ is symmetric if $T_{ij} = T_{ji}$ and skew-symmetric if $T_{ij} = -T_{ji}$.

    • Isotropic Tensors: Components remain the same in any Cartesian basis.

      • 0th order: Any scalar.

      • 1st order: Only the null vector.

      • 2nd order: Must be of the form $a\delta_{ij}$.

      • 4th order: $C_{ijkl} = \lambda \delta_{ij}\delta_{kl} + \mu \delta_{ik}\delta_{jl} + \nu \delta_{il}\delta_{jk}$.
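The defining property of an isotropic tensor can be verified directly: under the 2nd-order transformation rule $T'_{kl} = \ell_{ik}\ell_{jl}T_{ij}$, the components of $a\delta_{ij}$ are unchanged for any orthogonal change of basis. A sketch assuming NumPy; the random orthogonal matrix and the value $a = 5$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# A random orthogonal matrix (QR factorization) as direction cosines l_ij
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))

T = 5.0 * np.eye(3)  # isotropic 2nd-order tensor a*delta_ij with a = 5
T_new = np.einsum('ik,jl,ij->kl', Q, Q, T)  # T'_kl = l_ik l_jl T_ij

S = rng.normal(size=(3, 3))                 # a generic (non-isotropic) tensor
S_new = np.einsum('ik,jl,ij->kl', Q, Q, S)
```

The isotropic tensor keeps the same components; the generic one does not.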

EIGENVALUE PROBLEMS AND INVARIANTS

  • Principal Values and Axes

    • For a symmetric tensor $\mathbf{S}$, the eigenvalue problem is $S_{ij} v_j = \lambda v_i$.

    • The characteristic equation is $\det(S_{ij} - \lambda \delta_{ij}) = 0$, which expands to:

    • $-\lambda^3 + I_1 \lambda^2 - I_2 \lambda + I_3 = 0$

    • Principal Invariants:

      • $I_1 = \text{tr}(\mathbf{S}) = S_{ii} = \lambda_1 + \lambda_2 + \lambda_3$

      • $I_2 = \frac{1}{2}(S_{ii}S_{jj} - S_{ij}S_{ji}) = \lambda_1\lambda_2 + \lambda_2\lambda_3 + \lambda_3\lambda_1$

      • $I_3 = \det(\mathbf{S}) = \lambda_1\lambda_2\lambda_3$

  • Properties for Real Symmetric Tensors

    • All eigenvalues are real.

    • Eigenvectors corresponding to distinct eigenvalues are orthogonal.

    • There always exists an orthonormal set of eigenvectors (principal axes) in which the matrix representation is diagonal.
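These properties and the invariants above can be confirmed numerically for a concrete symmetric tensor; a sketch assuming NumPy, with arbitrary example entries.

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])  # a symmetric example tensor

lam, V = np.linalg.eigh(S)  # real eigenvalues, orthonormal eigenvectors

# Principal invariants computed from the components
I1 = np.trace(S)
I2 = 0.5 * (np.trace(S) ** 2 - np.trace(S @ S))
I3 = np.linalg.det(S)
```

The checks below confirm the invariant/eigenvalue identities, the orthonormality of the principal axes, the diagonal form in those axes, and that each eigenvalue satisfies the characteristic equation $-\lambda^3 + I_1\lambda^2 - I_2\lambda + I_3 = 0$.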

ORDINARY DIFFERENTIAL EQUATIONS (ODE)

  • Existence and Uniqueness for Initial Value Problems (IVP)

    • For $y' = f(x, y)$ with $y(a) = b$:

    • Lipschitz Condition: $|f(x, y_2) - f(x, y_1)| \leq C|y_2 - y_1|$. This is a smoothness condition stronger than continuity but weaker than continuous differentiability.

    • Existence & Uniqueness Theorem: If $f$ is continuous and Lipschitz continuous in a neighborhood of $(a, b)$, a unique solution exists locally.

  • Linear Systems with Constant Coefficients

    • System form: $\mathbf{y}' = \mathbf{A}\mathbf{y}$.

    • Solution using Matrix Exponential: $\mathbf{y}(x) = e^{x\mathbf{A}}\mathbf{y}(0)$.

    • Definition: $e^{x\mathbf{A}} = \sum_{k=0}^{\infty} \frac{(x\mathbf{A})^k}{k!}$.

    • Evaluation Strategies:

      • Diagonalizable $\mathbf{A}$: $e^{x\mathbf{A}} = \mathbf{P} e^{x\mathbf{\Lambda}} \mathbf{P}^{-1}$, where $\mathbf{P}$ contains the eigenvectors and $\mathbf{\Lambda}$ the eigenvalues.

      • Non-diagonalizable $\mathbf{A}$: Use the Jordan canonical form or the Cayley-Hamilton theorem, which expresses $e^{x\mathbf{A}} = R(\mathbf{A})$ for a polynomial $R$.

    • Non-homogeneous Systems: $\mathbf{y}' = \mathbf{A}\mathbf{y} + \mathbf{f}(x)$.

      • Solution: $\mathbf{y}(x) = e^{x\mathbf{A}} \int_0^x e^{-\eta \mathbf{A}} \mathbf{f}(\eta)\, d\eta + e^{x\mathbf{A}}\mathbf{y}(0)$.
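A quick numerical sketch of the matrix-exponential solution, assuming NumPy and SciPy; the matrix $\mathbf{A}$ and the evaluation point are arbitrary. It compares partial sums of the defining series with `scipy.linalg.expm` and checks by a finite difference that $\mathbf{y}(x) = e^{x\mathbf{A}}\mathbf{y}(0)$ satisfies $\mathbf{y}' = \mathbf{A}\mathbf{y}$.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])  # arbitrary constant-coefficient example
y0 = np.array([1.0, 0.0])

def expm_series(x, A, terms=30):
    """Partial sum of the series e^{xA} = sum_k (xA)^k / k!."""
    E = np.eye(len(A))
    term = np.eye(len(A))
    for k in range(1, terms):
        term = term @ (x * A) / k
        E = E + term
    return E

x = 0.7
y = lambda t: expm(t * A) @ y0          # candidate solution y(t)
h = 1e-6
dy = (y(x + h) - y(x - h)) / (2.0 * h)  # central-difference derivative y'(x)
```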

GREEN'S FUNCTIONS FOR DEs

  • Point Sources and the Delta Function

    • The Dirac delta function $\delta(x - \xi)$ represents a unit concentrated source at $x = \xi$.

    • Sifting property: $\int_a^b \phi(x) \delta(x - \xi)\, dx = \phi(\xi)$ for $a < \xi < b$.

  • Constructing Green's Functions (BVP)

    • For the problem $Lu = f(x)$, the Green's function $G(x; \xi)$ satisfies $LG(x; \xi) = \delta(x - \xi)$ with homogeneous boundary conditions.

    • Continuity and Jump Conditions (for a 2nd-order operator with leading term $a_2(x)u''$):

      • $G$ is continuous at $x = \xi$.

      • The derivative $G'$ has a jump: $G'(x; \xi)|_{x = \xi^+} - G'(x; \xi)|_{x = \xi^-} = \frac{1}{a_2(\xi)}$.

    • Integral Representation: The solution of the original problem is $u(x) = \int G(x; \xi) f(\xi)\, d\xi$.
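As a concrete sketch (assuming NumPy), take $Lu = u''$ on $(0,1)$ with $u(0) = u(1) = 0$. The continuity and jump conditions give the closed form $G(x;\xi) = x(\xi - 1)$ for $x < \xi$ and $\xi(x - 1)$ for $x > \xi$; integrating $G$ against a source with a known solution checks the integral representation. The source below is chosen so the exact solution is $u(x) = \sin(\pi x)$.

```python
import numpy as np

def G(x, xi):
    """Green's function for u'' = f on (0, 1) with u(0) = u(1) = 0."""
    return np.where(x < xi, x * (xi - 1.0), xi * (x - 1.0))

# Source chosen so that the exact solution is u(x) = sin(pi x)
f = lambda xi: -np.pi ** 2 * np.sin(np.pi * xi)

xi = np.linspace(0.0, 1.0, 20001)
dxi = xi[1] - xi[0]
x0 = 0.3
u_x0 = np.sum(G(x0, xi) * f(xi)) * dxi  # u(x0) = int G(x0; xi) f(xi) d xi
```

The tests also confirm reciprocity $G(x;\xi) = G(\xi;x)$ and the unit jump in $G'$ (here $a_2 = 1$).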

  • Adjoint Operators and Symmetry

    • A linear operator LL has an adjoint LL^* defined via integration by parts.

    • For $Lu = a_2u'' + a_1u' + a_0u$, the adjoint is $L^*v = (a_2v)'' - (a_1v)' + a_0v$.

    • Formal Self-Adjointness: Requires $a_2' = a_1$, allowing the operator to be written as $Lu = \frac{d}{dx}[p(x)u'] + q(x)u$.

    • Reciprocity: For self-adjoint problems, $G(x; \xi) = G(\xi; x)$.

  • Alternative Theorem and Modified Green's Functions

    • If the homogeneous problem LuH=0Lu_H = 0 has nontrivial solutions, the standard Green's function does not exist.

    • Consistency Condition: The problem has a solution only if the source $f(x)$ is orthogonal to the solutions of the adjoint homogeneous problem.

    • Modified Green's Function ($G_M$): Satisfies $LG_M = \delta(x - \xi) - \psi^*(x)\psi^*(\xi)$, where $\psi^*$ is a normalized solution of the adjoint homogeneous problem.

PARTIAL DIFFERENTIAL EQUATIONS (PDE)

  • Classification of 2nd Order Linear PDEs (in 2 variables x,yx, y)

    • Based on the general form $au_{xx} + 2bu_{xy} + cu_{yy} + \dots = 0$:

    • Hyperbolic ($b^2 - ac > 0$): Wave phenomena; describes propagation along characteristics. Example: Wave Equation.

    • Elliptic ($b^2 - ac < 0$): Equilibrium phenomena; no real characteristics. Example: Laplace/Poisson Equations.

    • Parabolic ($b^2 - ac = 0$): Diffusion phenomena; one family of characteristics. Example: Heat Equation.

  • Laplace and Poisson Equations

    • $\nabla^2 u = 0$ (Laplace), $\nabla^2 u = g(\mathbf{x})$ (Poisson).

    • Mean-Value Theorem: The value at a point is the average of its values on a surrounding circle or sphere.

    • Maximum-Minimum Principle: Harmonic functions attain their maximum and minimum on the boundary; a nonconstant harmonic function has no interior extrema.

  • Uniqueness Proofs (Energy Method)

    • For the Heat Equation: Consider the energy integral $\frac{1}{2} \int u^2\, dV$ for the difference of two solutions. If the boundary and initial data of the difference are zero, this "energy" is non-increasing and must remain zero for all time, proving uniqueness.
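The energy argument can be written out briefly; this sketch assumes the heat equation $u_t = k\nabla^2 u$ with $k > 0$ and applies the argument to the difference $w = u_1 - u_2$ of two solutions with the same data:

```latex
E(t) = \frac{1}{2}\int_V w^2 \, dV, \qquad
\frac{dE}{dt} = \int_V w\, w_t \, dV
             = k \int_V w\, \nabla^2 w \, dV
             = -k \int_V |\nabla w|^2 \, dV \le 0,
```

where the boundary term from integration by parts vanishes because $w = 0$ on the boundary. Since $E(0) = 0$ (identical initial data) and $E$ is non-negative and non-increasing, $E(t) = 0$ for all $t$, so $w \equiv 0$.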

  • Techniques for Solving Green's Functions in PDEs

    • Method of Images: Superimposing free-space solutions with virtual sources outside the domain to satisfy boundary conditions (works for simple geometries like symmetric planes or circles).

    • Full Eigenfunction Expansion: Represents the Green's function as a series $G(x; \xi) = \sum_n \frac{\phi_n(x)\phi_n(\xi)}{-\lambda_n}$.

    • Fourier/Laplace Transforms: Effective for unbounded or semi-infinite domains.
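For the model problem $Lu = u''$ on $(0,1)$ with Dirichlet conditions, the eigenfunction expansion can be compared against the closed-form Green's function. A sketch assuming NumPy, with eigenfunctions $\phi_n = \sqrt{2}\sin(n\pi x)$ satisfying $L\phi_n = -\lambda_n\phi_n$, $\lambda_n = (n\pi)^2$ (the Sturm-Liouville sign convention, matching the $-\lambda_n$ in the series above):

```python
import numpy as np

def G_series(x, xi, terms=2000):
    """Eigenfunction expansion G = sum_n phi_n(x) phi_n(xi) / (-lambda_n)."""
    n = np.arange(1, terms + 1)
    phi_x = np.sqrt(2.0) * np.sin(n * np.pi * x)
    phi_xi = np.sqrt(2.0) * np.sin(n * np.pi * xi)
    lam = (n * np.pi) ** 2
    return np.sum(phi_x * phi_xi / (-lam))

def G_exact(x, xi):
    """Closed form from the continuity and jump conditions."""
    return x * (xi - 1.0) if x < xi else xi * (x - 1.0)
```

The series converges like $1/n^2$, so a few thousand terms agree with the closed form to roughly $10^{-4}$.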

ORDINARY DIFFERENTIAL EQUATIONS: WORKED PROBLEMS

An ordinary differential equation (ODE) contains one or more functions of one independent variable and their derivatives. ODEs are commonly used to model various phenomena in science and engineering.

1. Existence and Uniqueness for Initial Value Problems (IVP)
  • An Initial Value Problem consists of a differential equation along with a condition specifying the value of the unknown function at a particular point.

  • For example, consider the equation:
    $y' = f(x, y)$
    with the initial condition:
    $y(a) = b$.

  • Lipschitz Condition: The function $f$ is said to satisfy the Lipschitz condition if there exists a constant $C > 0$ such that:
    $|f(x, y_2) - f(x, y_1)| \leq C\,|y_2 - y_1|$.

    • This condition ensures that small changes in $y$ result in small changes in $f$, which helps guarantee the stability and uniqueness of the solution.

  • Existence & Uniqueness Theorem: If the function $f$ is continuous and satisfies the Lipschitz condition in a neighborhood around $(a, b)$, there exists a unique solution $y(x)$ in a local region around $x = a$.

Problems:

Problem 1: Show that the function $f(x, y) = x + y$ satisfies the Lipschitz condition in any bounded region.

Solution:
To show that $f$ satisfies the Lipschitz condition, we need to show that there exists a constant $C$ such that:
$|f(x_1, y_1) - f(x_1, y_2)| \leq C\,|y_1 - y_2|$ for all $y_1$ and $y_2$ in the bounded region.

Calculate:
$|f(x_1, y_1) - f(x_1, y_2)| = |(x_1 + y_1) - (x_1 + y_2)| = |y_1 - y_2|$.
Thus, we can take $C = 1$, which satisfies the Lipschitz condition.

2. Linear Systems with Constant Coefficients
  • These are systems of ODEs that can be expressed in the matrix form:
    $\frac{d\mathbf{y}}{dx} = \mathbf{A}\mathbf{y}$
    where:

    • $\mathbf{y}$ is a vector of dependent variables (e.g., $\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}$)

    • $\mathbf{A}$ is a constant coefficient matrix.

  • Solution using Matrix Exponential: The solution is given by:
    $\mathbf{y}(x) = e^{x\mathbf{A}}\,\mathbf{y}(0)$.

    • Here, $e^{x\mathbf{A}}$ is the matrix exponential, defined as:
      $e^{x\mathbf{A}} = \mathbf{I} + x\mathbf{A} + \frac{x^2\mathbf{A}^2}{2!} + \frac{x^3\mathbf{A}^3}{3!} + \cdots$.

Problems:

Problem 2: Solve the linear system given by $\frac{d\mathbf{y}}{dx} = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix} \mathbf{y}$ with initial condition $\mathbf{y}(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

Solution:

  1. Write down the matrix $\mathbf{A} = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$.

  2. Calculate the eigenvalues. The characteristic equation is:
    $\det(\mathbf{A} - \lambda \mathbf{I}) = (2 - \lambda)(3 - \lambda) = 0$.
    This leads to $\lambda_1 = 2$ and $\lambda_2 = 3$.

  3. The matrix is diagonalizable. The eigenvectors are $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$ for $\lambda_1 = 2$ and $\mathbf{v}_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$ for $\lambda_2 = 3$, so
    $\mathbf{P} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad \mathbf{D} = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$.

  4. The matrix exponential is:
    $e^{x\mathbf{A}} = \mathbf{P} e^{x\mathbf{D}} \mathbf{P}^{-1} = \begin{pmatrix} e^{2x} & e^{3x} - e^{2x} \\ 0 & e^{3x} \end{pmatrix}$.

  5. Therefore,
    $\mathbf{y}(x) = e^{x\mathbf{A}} \begin{pmatrix} 1 \\ 0 \end{pmatrix} = \begin{pmatrix} e^{2x} \\ 0 \end{pmatrix}$.
    (The initial vector is an eigenvector for $\lambda_1 = 2$, so the solution simply grows as $e^{2x}$ along it.)
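The answer to Problem 2 can be cross-checked with `scipy.linalg.expm`, assuming NumPy/SciPy; the evaluation point $x = 1.2$ is arbitrary, and the closed form below is the one obtained by diagonalizing $\mathbf{A}$ with eigenvectors $(1,0)$ and $(1,1)$.

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
y0 = np.array([1.0, 0.0])

x = 1.2
E = expm(x * A)  # numeric matrix exponential
y = E @ y0       # solution y(x) = e^{xA} y(0)

# Closed form from diagonalization:
# e^{xA} = [[e^{2x}, e^{3x} - e^{2x}], [0, e^{3x}]]
E_closed = np.array([[np.exp(2 * x), np.exp(3 * x) - np.exp(2 * x)],
                     [0.0, np.exp(3 * x)]])
```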

3. Non-homogeneous Systems
  • A non-homogeneous system is of the form:
    $\frac{d\mathbf{y}}{dx} = \mathbf{A}\mathbf{y} + \mathbf{b}(x)$, where $\mathbf{b}(x)$ is a function that represents external input or influence.

  • Solution Form:

    • The general solution is:
      $\mathbf{y}(x) = \mathbf{y}_h(x) + \mathbf{y}_p(x)$,
      where $\mathbf{y}_h(x)$ is the solution of the corresponding homogeneous equation (with $\mathbf{b}(x) = \mathbf{0}$) and $\mathbf{y}_p(x)$ is a particular solution of the non-homogeneous equation.

Problems:

Problem 3: Solve the non-homogeneous system $\frac{d\mathbf{y}}{dx} = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix} \mathbf{y} + \begin{pmatrix} e^{x} \\ 0 \end{pmatrix}$ with initial condition $\mathbf{y}(0) = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

Solution:

  1. Solve the homogeneous part first. Using the eigenpairs from Problem 2, the general homogeneous solution is:
    $\mathbf{y}_h(x) = c_1 e^{2x} \begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{3x} \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

  2. For the particular solution, guess a solution of the form:
    $\mathbf{y}_p = \begin{pmatrix} A e^{x} \\ B \end{pmatrix}$.

  3. Substitute into the non-homogeneous equation. The second component gives $0 = 3B$, so $B = 0$; the first component gives $A e^{x} = 2A e^{x} + B + e^{x}$, so $A = -1$. Hence $\mathbf{y}_p = \begin{pmatrix} -e^{x} \\ 0 \end{pmatrix}$.

  4. Apply the initial condition to the full solution $\mathbf{y} = \mathbf{y}_h + \mathbf{y}_p$: the second component forces $c_2 = 0$, and the first gives $c_1 - 1 = 1$, so $c_1 = 2$.

  5. Finally,
    $\mathbf{y}(x) = \begin{pmatrix} 2e^{2x} - e^{x} \\ 0 \end{pmatrix}$.
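Problem 3's answer can be verified by numerical integration, assuming NumPy/SciPy; `solve_ivp`, the tolerances, and the check point $x = 0.8$ are implementation choices, not part of the derivation.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
b = lambda x: np.array([np.exp(x), 0.0])  # forcing term

# Integrate y' = A y + b(x) from the initial condition y(0) = (1, 0)
sol = solve_ivp(lambda x, y: A @ y + b(x), (0.0, 1.0), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

x = 0.8
y_num = sol.sol(x)
y_exact = np.array([2.0 * np.exp(2 * x) - np.exp(x), 0.0])
```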

Summary

Ordinary differential equations allow us to express and solve for functions that depend on one variable and their derivatives. By understanding initial value problems, linear systems, and the distinction between homogeneous and non-homogeneous systems, one can develop strategies for finding solutions to these equations, which are foundational in many scientific disciplines.