Iterative Methods for Solving Partial Differential Equations

Iterative Methods

  • Finite Difference Methods

    • Used for solving partial differential equations (PDEs).

    • Can lead to large linear systems, with matrices often on the order of 500 × 500 or larger.

    • General Gaussian elimination is inefficient for these matrices because:

      • They are usually sparse, meaning most of their entries are zero.

      • Standard Gaussian elimination destroys this structure, converting zeros into non-zeros (fill-in) and hence reducing sparsity.
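
  • A small MATLAB demonstration of fill-in (our own illustration, using MATLAB's built-in numgrid and delsq helpers to build a sparse finite-difference matrix):

% Illustrative only: build a sparse 5-point finite-difference matrix,
% factor it, and count the non-zeros created by elimination (fill-in).
A = delsq(numgrid('S', 25));      % sparse matrix on a 23 x 23 interior grid
[L, U] = lu(A);                   % LU factorization
fprintf('nnz(A) = %d, nnz(L) + nnz(U) = %d\n', nnz(A), nnz(L) + nnz(U));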

Iterative Methods Overview

  • Preferred Method: For very large, sparse matrices, iterative methods are the recommended choice.

    • Definition: Iterative methods generate a sequence of approximations that converge towards the solution.

    • They do not provide exact solutions but can be continued until a satisfactory error tolerance is achieved.

    • Benefit: They require less storage because they only work with a few vectors of length n at a time, unlike methods that manipulate entire matrices directly.

Classical Iterative Methods

  • Equations Representation: In classical iterative methods, the system is rewritten so that each equation is solved for one unknown in terms of the others; the $i$th equation is solved for $x_i$.

Jacobi Method

  • Initial Values: Begin with initial estimates for the unknowns, represented as $x_1, x_2, x_3, \ldots, x_n$.

    • If no estimates are available, use $x^{(0)} = 0$.

  • Calculation Process:

    • The method proceeds by inserting the current approximation into the right-hand side of the equations to compute the next one; at step $k$,

    x^{(k)}_i = \frac{1}{a_{ii}} \bigg( b_i - \textstyle\sum_{j=1, j \neq i}^{n} a_{ij} x^{(k-1)}_j \bigg), \quad i = 1, \ldots, n

    • This new set of approximations is plugged back into the equations in successive iterations until the relative error becomes sufficiently small.
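
  • A minimal MATLAB sketch of the Jacobi iteration (our own construction; the name jacobi and its interface are illustrative, not taken from the text):

function [x, relresid] = jacobi(A, b, x0, tol, maxiter)
% Jacobi iteration for Ax = b: every component of the new iterate
% is computed from the previous iterate only.
    x = x0;
    d = diag(A);                            % diagonal entries a_ii
    for k = 1:maxiter
        x = (b - (A - diag(d))*x) ./ d;     % x_i = (b_i - sum_{j~=i} a_ij x_j)/a_ii
        relresid = norm(b - A*x) / norm(b); % relative residual
        if relresid < tol
            return;
        end
    end
end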

Example 1: Jacobi Method Interpretation

  • Matrix Setup: For a system $Ax = b$, where

    A = \begin{pmatrix} -1 & 4 & 1 \\ 16 & -7 & 1 \\ \ast & \ast & \ast \end{pmatrix}, \qquad b = \begin{pmatrix} -2 \\ \ast \\ \ast \end{pmatrix}

  • Iterations: Start with initial guess $x^{(0)} = \begin{pmatrix} 5 \\ 0 \\ 0 \end{pmatrix}$

  • After executing 12 iterations:

    • Final estimates:

    x^{(12)}_1 = 0.4837, \quad x^{(12)}_2 = -0.1793, \quad x^{(12)}_3 = -0.7989

  • Relative residual:

    • \frac{\|b - Ax^{(12)}\|_2}{\|b\|_2} = 21.4500 \times 10^{-6}

Gauss-Seidel Iterative Method

  • Initialization: Start with known approximate values or set to 0.

  • Calculation Process: Use the most recently computed values as soon as they become available within each sweep, which typically gives faster convergence than the Jacobi method.

Gauss-Seidel Iteration Formula

  • The formula is as follows:

    x^{(k)}_i = \frac{1}{a_{ii}} \bigg( b_i - \textstyle\sum_{j=1}^{i-1} a_{ij} x^{(k)}_{j} - \textstyle\sum_{j=i+1}^{n} a_{ij} x^{(k-1)}_{j} \bigg)

  • Apply recursively until convergence is achieved.
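
  • A minimal MATLAB sketch of the idea (our own construction, mirroring the Jacobi sketch above; the name gauss_seidel is illustrative):

function [x, relresid] = gauss_seidel(A, b, x0, tol, maxiter)
% Gauss-Seidel iteration for Ax = b: components are overwritten in
% place, so each update already uses the newest values x(1..i-1).
    n = length(b);
    x = x0;
    for k = 1:maxiter
        for i = 1:n
            x(i) = (b(i) - A(i,1:i-1)*x(1:i-1) - A(i,i+1:n)*x(i+1:n)) / A(i,i);
        end
        relresid = norm(b - A*x) / norm(b);
        if relresid < tol
            return;
        end
    end
end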

Example 2: Gauss-Seidel Method Execution

  • Matrix Recap: Use the same system from Example 1

  • Start with

    • x^{(0)} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix}

  • Final estimates after 12 iterations are similar to Jacobi's.

  • The relative residual is $2.8183 \times 10^{-7}$, substantially smaller than Jacobi's at the same iteration count.

Successive Overrelaxation (SOR) Method

  • Purpose: Developed to speed up convergence of the Gauss-Seidel method.

  • Mechanism: It blends the previously computed value and the new value using a weighted average with relaxation parameter $\omega$.

    • SOR Formula:

x^{(k)}_i = (1 - \omega)x^{(k-1)}_i + \frac{\omega}{a_{ii}} \left( b_i - \textstyle\sum_{j=1}^{i-1} a_{ij} x^{(k)}_j - \textstyle\sum_{j=i+1}^{n} a_{ij} x^{(k-1)}_j \right)

  • If $\omega = 1$, the method reverts to the Gauss-Seidel method.

    • If $\omega > 1$, it leads to overrelaxation, enhancing convergence.

    • If $\omega < 1$, it indicates underrelaxation, slowing down convergence.

Example 3: Applying SOR

  • Given System:

    A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 3 \end{pmatrix}, \qquad b = \begin{pmatrix} -1 \\ 5 \\ 7 \end{pmatrix}

  • Exact Solution: [−11, 6, 4].

  • Iterations: Use $\omega = 1.1$ and run for 15 iterations.

  • The iterates move steadily toward the exact solution, leading to:

    • Final estimates: x^{(15)} \approx \begin{pmatrix} -11 \\ 6 \\ 4 \end{pmatrix}

  • The relative residual after 15 iterations is 4.72 \times 10^{-7}.

SOR Algorithm Structure

  • Code Function:

function SOR(A,b,x0,w,tol,maxiter)
    % Computes the solution of Ax = b using SOR iteration with
    % relaxation parameter w and initial guess x0, stopping when the
    % relative residual falls below tol or after maxiter iterations.
end
  • Condition check for the relative residual stopping criterion:

    \text{relresid} = \frac{\|b - Ax\|_2}{\|b\|_2}
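
  • One way the body might be filled in (a sketch following the stopping criterion above, not the text's actual code):

function [x, relresid, iter] = SOR(A, b, x0, w, tol, maxiter)
% SOR iteration for Ax = b with relaxation parameter w.
    n = length(b);
    x = x0;
    for iter = 1:maxiter
        for i = 1:n
            gs = (b(i) - A(i,1:i-1)*x(1:i-1) - A(i,i+1:n)*x(i+1:n)) / A(i,i);
            x(i) = (1 - w)*x(i) + w*gs;       % weighted average of old value and GS update
        end
        relresid = norm(b - A*x) / norm(b);   % relative residual stopping test
        if relresid < tol
            return;
        end
    end
end

  • For instance, Example 3 could be reproduced with a call such as [x, relresid] = SOR(A, b, zeros(3,1), 1.1, 1e-8, 15).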

Convergence of Basic Iterative Methods

  • Convergence Criteria: The Jacobi, Gauss-Seidel, and SOR methods are not guaranteed to converge for every system.

  • Matrix Form: To enable a deeper analysis, split $A = L + D + U$ into its strictly lower triangular part $L$, diagonal $D$, and strictly upper triangular part $U$, and express each iteration in the matrix form $x^{(k)} = Bx^{(k-1)} + C$, where $B$ is the iteration matrix.

Matrix Formulations

  • Jacobi Method Matrix:

    x^{(k)} = B_J x^{(k-1)} + C_J

    • where $B_J = -D^{-1}(L + U)$ and $C_J = D^{-1}b$.

  • Gauss-Seidel Method Matrix:

    x^{(k)} = B_{GS} x^{(k-1)} + C_{GS}

    • where $B_{GS} = -(L + D)^{-1} U$ and $C_{GS} = (L + D)^{-1} b$.

  • SOR Method Matrix:

    x^{(k)} = B_{SOR} x^{(k-1)} + C_{SOR}

    • where $B_{SOR} = (D + \omega L)^{-1}\big((1 - \omega)D - \omega U\big)$ and $C_{SOR} = \omega (D + \omega L)^{-1} b$.

Convergence Conditions

  • Condition Analysis: Whether and how fast an iteration converges is governed by the spectral radius $\rho(B)$, the largest magnitude among the eigenvalues of the iteration matrix $B$.

    • If $\rho(B) < 1$, the iteration converges for every initial guess; the smaller $\rho(B)$, the faster the convergence.
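
  • A quick MATLAB check of this condition (our own illustration, for a given square matrix A):

% Form the iteration matrices from the splitting A = L + D + U and
% compute their spectral radii; rho(B) < 1 predicts convergence.
D  = diag(diag(A));
Lo = tril(A, -1);                   % strictly lower triangular part L
Up = triu(A,  1);                   % strictly upper triangular part U
BJ  = -D \ (Lo + Up);               % Jacobi iteration matrix
BGS = -(Lo + D) \ Up;               % Gauss-Seidel iteration matrix
rhoJ  = max(abs(eig(full(BJ))));    % spectral radius of B_J
rhoGS = max(abs(eig(full(BGS))));   % spectral radius of B_GS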

Special Cases for Convergence

  • Strict Diagonal Dominance: If $A$ is strictly diagonally dominant, i.e. $|a_{ii}| > \sum_{j \neq i} |a_{ij}|$ for every row $i$, then convergence of both the Jacobi and Gauss-Seidel methods is guaranteed for any initial guess.
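
  • The condition is simple to test in MATLAB (a one-line helper of our own):

% True when every diagonal entry dominates the rest of its row in magnitude.
isdd = all( abs(diag(A)) > sum(abs(A), 2) - abs(diag(A)) );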

Choosing the Relaxation Parameter for SOR

  • Optimal Value: A poor choice of $\omega$ drastically affects convergence rates.

    • Convergence requires $0 < \omega < 2$, and the optimal $\omega$ falls in this interval.

  • As with the other methods, convergence is achieved exactly when $\rho(B_{SOR}) < 1$.

Example of Optimal Relaxation Parameter Calculation

  • The function optomega(A) is used to find the optimal $\omega$ for a given matrix.

  • With the tuned parameter, the iteration reaches the desired residual in markedly fewer iterations.
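
  • The text does not show the body of optomega; one classical possibility (an assumption on our part, valid for consistently ordered matrices such as those arising from the five-point Laplacian) computes it from the spectral radius of the Jacobi matrix:

function w = optomega(A)
% Classical optimal SOR parameter w = 2/(1 + sqrt(1 - rho^2)), where
% rho is the spectral radius of the Jacobi iteration matrix B_J
% (assumes A is consistently ordered, so rho is real and rho < 1).
    D   = diag(diag(A));
    BJ  = -D \ (A - D);              % B_J = -D^{-1}(L + U)
    rho = max(abs(eig(full(BJ))));   % spectral radius of B_J
    w   = 2 / (1 + sqrt(1 - rho^2));
end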

Poisson's Equation Overview

  • Significance: Poisson's equation is crucial in applied mathematics, appearing in various fields including:

    • astronomy, heat flow, fluid dynamics, and electromagnetism.

  • Mathematical Statement: Solve $u_{xx} + u_{yy} = f(x, y)$ for $u(x, y)$ over a bounded region $R$, with the values of $u$ prescribed on the boundary of $R$.

Finite Difference Methods on Poisson's Equation

  • For simplification, consider $R$ to be the unit square $0 \le x \le 1$, $0 \le y \le 1$.

  • Replacing the second derivatives by centered differences on a grid with spacing $h$ converts the PDE into the five-point difference equations

    u_{i+1,j} + u_{i-1,j} + u_{i,j+1} + u_{i,j-1} - 4u_{ij} = h^2 f_{ij},

    whose solution approximates the actual answer at the grid points.

Iterative Method Application on Poisson's Equation

  • The resulting linear system is large and sparse, so SOR iteration also applies effectively under appropriately defined boundary conditions.

  • In practice, one sweeps the grid with the SOR update until the change between sweeps (or the residual) falls below a tolerance, as in the sketch below.
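
  • A minimal MATLAB sketch of this strategy (entirely our own construction: the grid size, right-hand side, $\omega$, and tolerance are illustrative choices, with zero Dirichlet boundary values):

% SOR sweeps for u_xx + u_yy = f on the unit square, u = 0 on the boundary.
n = 50;  h = 1/(n+1);                   % n x n interior grid with spacing h
w = 1.9;  tol = 1e-8;                   % illustrative relaxation parameter, tolerance
[X, Y] = ndgrid(h:h:1-h);
f = -2*pi^2 * sin(pi*X) .* sin(pi*Y);   % sample f (exact u = sin(pi x) sin(pi y))
u = zeros(n+2);                         % grid values; boundary rows/columns stay 0
for sweep = 1:10000
    change = 0;
    for i = 2:n+1
        for j = 2:n+1
            unew = (1 - w)*u(i,j) + (w/4) * ( u(i+1,j) + u(i-1,j) ...
                   + u(i,j+1) + u(i,j-1) - h^2 * f(i-1,j-1) );
            change = max(change, abs(unew - u(i,j)));
            u(i,j) = unew;
        end
    end
    if change < tol, break; end         % stop when the sweep changes little
end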