
ENGSCI 211 Mathematical Modelling 2 Flashcards

An ordinary differential equation (ODE) describes the relationship between a dependent variable y and an independent variable t, written y(t). The order of an ODE is that of the highest derivative present. A linear ODE has coefficients that are constants or depend only on the independent variable, following the form a_0(t)y + a_1(t)\,\frac{dy}{dt} + a_2(t)\,\frac{d^2y}{dt^2} + … + a_n(t)\,\frac{d^ny}{dt^n} = f(t). Non-linear ODEs, exemplified by \frac{d^2θ}{dt^2} + ω^2 \sin(θ) = 0, can be approximated by linear ODEs such as \frac{d^2θ}{dt^2} + ω^2θ = 0 for small angles. In a homogeneous ODE every term involves the dependent variable; otherwise the ODE is nonhomogeneous, indicating external forcing.

If y_1 = f(t) and y_2 = g(t) are solutions to a linear homogeneous ODE, then y = ay_1 + by_2 is also a solution; this is linear superposition. For example, given the ODE \frac{d^2y}{dt^2} + y = 0, y_1 = \sin(t) and y_2 = \cos(t) are solutions, so y = 2\sin(t) + 3\cos(t) is also a solution.
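The superposition claim can be verified symbolically. A quick sketch using sympy, substituting y = 2\sin(t) + 3\cos(t) into y'' + y and confirming the residual vanishes:

```python
import sympy as sp

t = sp.symbols('t')
y = 2*sp.sin(t) + 3*sp.cos(t)               # linear combination of the two solutions
residual = sp.simplify(sp.diff(y, t, 2) + y)  # substitute into y'' + y
print(residual)                              # 0, so y is also a solution
```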

An initial value problem (IVP) consists of an ODE along with initial conditions, where the number of initial conditions matches the order of the ODE. Examples include \frac{dv}{dt} = g, v(0) = v_0 (first-order, linear, nonhomogeneous), \frac{dT}{dt} = -α(T - T_{\text{air}}), T(0) = T_0 (first-order, linear, nonhomogeneous), and \frac{d^2θ}{dt^2} + ω^2 \sin(θ) = 0, θ(0) = θ_0, \frac{dθ}{dt}(0) = 0 (second-order, nonlinear, homogeneous).

Analytic solution methods include direct integration for ODEs of the form \frac{d^ny}{dt^n} = f(t), separation of variables for first-order ODEs of the form \frac{dy}{dt} = g(t)h(y) leading to \frac{dy}{h(y)} = g(t)dt, integrating factor for first-order linear nonhomogeneous ODEs \frac{dy}{dt} + g(t)y = f(t), and exponential substitution for linear homogeneous ODEs with constant coefficients a_0y + a_1\,\frac{dy}{dt} + … + a_n\,\frac{d^ny}{dt^n} = 0, using a trial solution y = Ce^{λt} to find the characteristic equation.

A general solution satisfies the differential equation but includes unknown coefficients. A particular solution satisfies both the differential equation and the initial conditions, providing a specific instance of the general solution.

For example, direct integration of \frac{dv}{dt} = 3, v(0) = 0 yields a general solution v = 3t + c and a particular solution v = 3t. Separation of variables for y\,\frac{dy}{dt} = -t, y(0) = 1 gives y^2 = -t^2 + d and the particular solution y^2 = 1 - t^2. Exponential substitution for y' + 5y = 0 leads to the characteristic equation λ + 5 = 0, resulting in the general solution y = Ce^{-5t}.
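The exponential-substitution example above can be reproduced with a computer algebra system. A sketch using sympy's dsolve on y' + 5y = 0 (the symbol names are illustrative):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y' + 5y = 0, characteristic equation lambda + 5 = 0
ode = sp.Eq(y(t).diff(t) + 5*y(t), 0)
sol = sp.dsolve(ode, y(t))
print(sol)  # general solution y(t) = C1*exp(-5*t)
```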

A vector is a one-dimensional array, such as \vec{a} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix}, and can also be notated as \vec{a} = (a_1, a_2, a_3) or \vec{a} = a_1\,\hat{ı} + a_2\,\hat{ȷ} + a_3\,\hat{k}. The magnitude of a vector is given by |\vec{a}| = \sqrt{(a_1)^2 + (a_2)^2 + (a_3)^2}, and a unit vector is \hat{a} = \frac{\vec{a}}{|\vec{a}|}.

The vector dot product represents the “overlap” between two vectors, calculated as \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \cdot \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix} = a_1b_1 + a_2b_2 + a_3b_3 or \vec{a} \cdot \vec{b} = |\vec{a}| \, |\vec{b}| \cos θ, resulting in a scalar.
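Both forms of the dot product can be checked numerically. A minimal sketch with numpy (the vectors are illustrative):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 2.0])

dot = a @ b          # a1*b1 + a2*b2 + a3*b3 = 4 - 2 + 6 = 8
# solving |a||b|cos(theta) = a.b for cos(theta):
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
```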

A matrix is a rectangular array of elements with order (rows × columns), such as a 4 × 3 matrix, with elements accessed by lowercase letters, e.g., a_{11} = -5. Matrix operations include addition/subtraction (requiring identical order), scalar multiplication (multiplying each element), and matrix multiplication (where the number of columns of the first matrix must equal the number of rows of the second matrix and is generally non-commutative).

The determinant of a 2 × 2 matrix is calculated as \det(A) = a_{11}a_{22} - a_{12}a_{21}, and its inverse is A^{-1} = \frac{1}{\det(A)} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}. The transpose of a matrix swaps rows and columns, denoted as A_{m \times n} \rightarrow A^T_{n \times m}, and a symmetric matrix is a square matrix equal to its transpose.
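The 2 × 2 determinant and inverse formulas can be checked against numpy. A sketch with an illustrative matrix, verifying the defining property A^{-1}A = I:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])

det = A[0, 0]*A[1, 1] - A[0, 1]*A[1, 0]   # a11*a22 - a12*a21 = 12 - 2 = 10
A_inv = (1.0/det) * np.array([[ A[1, 1], -A[0, 1]],
                              [-A[1, 0],  A[0, 0]]])

assert np.allclose(A_inv, np.linalg.inv(A))   # matches numpy's inverse
assert np.allclose(A_inv @ A, np.eye(2))      # A^{-1} A = I
```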

The vector cross product produces a vector orthogonal to both input vectors, with a magnitude of |\vec{a} \times \vec{b}| = |\vec{a}| \,|\vec{b}| \,\sin(θ), calculated using a determinant.
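The orthogonality of the cross product is easy to confirm numerically. A minimal sketch with numpy's cross (the unit vectors are illustrative):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])

c = np.cross(a, b)     # [0, 0, 1]: orthogonal to both a and b
```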

Special matrices include diagonal matrices (non-zero elements only on the diagonal), triangular matrices (non-zero elements only on or below/above the diagonal), and the identity matrix (a diagonal matrix with ones on the diagonal).

For multiplication with inverse and identity matrices, A^{-1}A = AA^{-1} = I and IA = AI = A. Matrix premultiplication is used to solve A\,\vec{x} = \vec{b} by premultiplying by A^{-1}, resulting in \vec{x} = A^{-1}\,\vec{b}.
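The premultiplication step can be sketched directly in numpy (the system is illustrative; in practice np.linalg.solve is preferred over forming the inverse):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

# premultiply both sides of A x = b by A^{-1}: x = A^{-1} b
x = np.linalg.inv(A) @ b   # [0.8, 1.4]
```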

Two-dimensional Cartesian coordinates (x, y) represent distances along the x and y axes from the origin. Polar coordinates (r, θ) represent the distance from the origin (r) and the positive angular displacement from the x-axis (θ). The conversion equations are x = r \cos θ, y = r \sin θ, r = +\sqrt{x^2 + y^2}, and

θ = \begin{cases} \tan^{-1}(\frac{y}{x}) & x > 0, y > 0 \\ \tan^{-1}(\frac{y}{x}) + π & x < 0, y > 0 \\ \tan^{-1}(\frac{y}{x}) + π & x < 0, y < 0 \\ \tan^{-1}(\frac{y}{x}) + 2π & x > 0, y < 0 \end{cases}.
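In code, the quadrant cases are usually handled by atan2 rather than by branching on signs. A sketch of the conversion, shifted so θ lands in [0, 2π) as in the piecewise formula (the test point is illustrative):

```python
import math

def to_polar(x, y):
    """Cartesian (x, y) -> polar (r, theta) with theta in [0, 2*pi)."""
    r = math.sqrt(x**2 + y**2)
    theta = math.atan2(y, x)       # atan2 resolves the quadrant cases
    if theta < 0:
        theta += 2*math.pi         # shift (-pi, 0) results into [0, 2*pi)
    return r, theta

r, theta = to_polar(-1.0, 1.0)     # second quadrant: r = sqrt(2), theta = 3*pi/4
```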

Methods for solving a system of linear equations include matrix inverse, Gaussian elimination, and LU factorization. A system is represented in matrix form as A\,\vec{x} = \vec{b}, where A is the coefficient matrix, \vec{x} is the variable vector, and \,\vec{b} is the constant vector. Solving by matrix inverse requires A to be nonsingular but is time-consuming for large matrices. Gaussian elimination transforms A\,\vec{x} = \vec{b} into an upper triangular form for solving using backward substitution.
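The elimination-then-backward-substitution procedure can be sketched in a few lines of numpy. This version omits pivoting for clarity, so it assumes no zero appears on the diagonal during elimination; the 2 × 2 system is illustrative:

```python
import numpy as np

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination (no pivoting, for illustration)."""
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # forward elimination: reduce A to upper triangular form
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]          # multiplier for row i
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    # backward substitution on the upper triangular system
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
x = gauss_solve(A, b)
```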

LU factorization decomposes the coefficient matrix A into the product of a lower triangular matrix L and an upper triangular matrix U, such that A = LU. It uses Gaussian elimination to find U, with the multipliers recorded in L. Solving proceeds in two steps: solve L\,\vec{y} = \vec{b} for the intermediate solution \vec{y} using forward substitution, then solve U\,\vec{x} = \vec{y} for the solution vector \vec{x} using backward substitution. LU factorization is efficient for solving multiple systems with the same A but different \vec{b}, since the factorization is computed once and only the cheap substitution steps are repeated. In the same way, the matrix inverse can be computed efficiently by solving A\,\vec{x}_i = \vec{e}_i for each column \vec{e}_i of the identity matrix, reusing the same L and U.
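The factor-once, substitute-many idea can be sketched with a Doolittle-style factorization in numpy. No pivoting is done (so a nonzero diagonal is assumed during elimination), and the matrix and right-hand side are illustrative:

```python
import numpy as np

def lu_factor_nopivot(A):
    """Doolittle LU factorisation without pivoting (illustrative sketch)."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]    # record the multiplier in L
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                      # forward substitution: L y = b
        y[i] = b[i] - L[i, :i] @ y[:i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):          # backward substitution: U x = y
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

A = np.array([[4.0, 3.0], [6.0, 3.0]])
L, U = lu_factor_nopivot(A)                 # factorise once
x = lu_solve(L, U, np.array([10.0, 12.0]))  # reuse L, U for each new b
```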
