Numerical Methods

Roots of Nonlinear Equations

Institution: PAMANTASAN NG LUNGSOD NG MAYNILA
Department: College of Engineering and Technology (CET)
Instructor: Engr. Reynaldo Ted L. Peñas II, MScEngg


Introduction to Roots of Nonlinear Equations

  • The goal is to find the solutions of the equation f(x) = 0, where the function f is defined.

  • The solutions (values of x) are known as the roots of the equation f(x) = 0, or the zeroes of the function f(x).


Nature of Roots

Types of Roots
  • Roots or zeroes may be:

    • Real: Actual numerical values on the real number line.

    • Complex: Often arise in polynomials, in applications such as damped vibrations (seldom significant in practical computations).

  • The approximate locations of the roots can be effectively estimated by plotting the function.


Extraction of Roots

Key Points to Consider
  • All methods of finding roots are iterative procedures that begin with a starting point (an initial estimate of the root).

  • An initial estimate is crucial:

    • A poor choice may lead to failure to converge or converge to the “wrong” root.


Stopping Criteria for Iterative Methods

  • Criteria for stopping iterations are based on the approximate relative error between successive estimates:

    • ε_a = |(x_present − x_previous) / x_present| × 100%

    • x_present = estimated root in the current iteration

    • x_previous = estimated root in the last iteration

    • n = number of significant figures required; iteration stops once ε_a falls below the tolerance ε_s = (0.5 × 10^(2−n))% (the Scarborough criterion)
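The stopping test can be sketched in Python. This is a minimal sketch assuming the standard percent-error form of the Scarborough criterion; the function names are illustrative, not from the lecture.

```python
def approx_rel_error(x_present, x_previous):
    """Approximate relative error (in percent) between successive root estimates."""
    return abs((x_present - x_previous) / x_present) * 100.0

def stopping_tolerance(n):
    """Tolerance (in percent) that guarantees at least n significant figures."""
    return 0.5 * 10.0 ** (2 - n)

# Iteration stops once approx_rel_error(...) < stopping_tolerance(n).
```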


Methods for Finding Roots of Nonlinear Equations

  • Various methods include:

    • Direct or Incremental Search Method

    • Interval-Halving Method

    • Method of False-Position

    • Newton-Raphson 1st Method

    • Newton-Raphson 2nd Method

    • Secant Method

    • Fixed-Point Iteration

    • Bairstow’s Method


Direct or Incremental Search Method

Description
  • If f(x_1) and f(x_2) have opposite signs, then at least one root exists in the interval [x_1, x_2].

  • A small enough interval likely contains a single root.

  • Zeroes can be detected by evaluating the function at various points and looking for a change in sign.

Algorithm
  1. Estimate the initial interval [x_i, x_{i+1}] bounding the root (f(x_i) and f(x_{i+1}) must have opposite signs).

  2. Evaluate f(x_i) and f(x_{i+1}).

  3. Check the product f(x_i) × f(x_{i+1}):

    • If positive: set x_{i+1} as the new lower bound x_i.

    • If negative: estimate a smaller Δx and generate a new x_{i+1}.

  4. Repeat steps 2 & 3 until stopping criteria satisfied.

Example
  • Test Function: f(x) = e^(−x) − cos(x)

  • Initial estimates: x_i = 1, x_{i+1} = 1.5, Δx = 0.5

Simulation of Iterations

| Iteration (i) | Root | x_i  | Δx   | x_{i+1} | f(x_i)   | f(x_{i+1}) | f(x_i)·f(x_{i+1}) |
|---------------|------|------|------|---------|----------|------------|-------------------|
| 1             | 1    | 1    | 0.5  | 1.5     | -0.17242 | 0.15239    | -                 |
| 2             | 1    | 1    | 0.25 | 1.25    | -0.17242 | -0.02882   | +                 |
| 3             | 1.25 | 1.25 | 0.25 | 1.5     | -0.02882 | 0.15239    | -                 |
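The steps above can be sketched in Python for the test function f(x) = e^(−x) − cos(x). This is a minimal sketch; the step-halving detail follows the simulation table, and the names and tolerances are illustrative.

```python
import math

def f(x):
    return math.exp(-x) - math.cos(x)

def incremental_search(f, x_i, dx, tol=1e-6, max_iter=200):
    """Advance x_i by dx while there is no sign change; halve dx once a root is bracketed."""
    while dx >= tol and max_iter > 0:
        x_next = x_i + dx
        if f(x_i) * f(x_next) > 0:   # no sign change: move the lower bound up
            x_i = x_next
        else:                        # root bracketed in [x_i, x_next]: refine the step
            dx /= 2.0
        max_iter -= 1
    return x_i

root = incremental_search(f, 1.0, 0.5)   # approaches the root near 1.2927
```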


Bisection Method of Bolzano

Description
  • The method is based on the principle that if there is a root in the interval [x_1, x_2], then f(x_1)·f(x_2) < 0.

  • To halve the interval, compute f(x_3), where x_3 = (x_1 + x_2)/2 is the midpoint.

  • Depending on the sign of f(x_2)·f(x_3):

    • If negative, the root lies in [x_2, x_3], so replace x_1 with x_3.

    • If positive, the root lies in [x_1, x_3], so replace x_2 with x_3.

Algorithm
  1. Estimate the initial interval [x_i, x_{i+1}] bounding the root (f(x_i) and f(x_{i+1}) must have opposite signs).

  2. Compute the midpoint x_r = (x_i + x_{i+1})/2.

  3. Evaluate f(x_i), f(x_{i+1}), and f(x_r).

  4. Check the product f(x_i) × f(x_r):

    • If positive: define x_r as the new lower bound x_i.

    • If negative: define x_r as the new upper bound x_{i+1}.

  5. Repeat steps 3 & 4 until stopping criteria satisfied.

Example
  • Test Function: f(x) = e^(−x) − cos(x)

  • Initial estimates: x_i = 1, x_{i+1} = 1.5, giving x_r = 1.25

Simulation of Iterations

| Iteration (i) | Root | x_i  | x_{i+1} | x_r    | f(x_i)   | f(x_{i+1}) | f(x_r)   | f(x_i)·f(x_r) |
|---------------|------|------|---------|--------|----------|------------|----------|---------------|
| 1             | 1    | 1    | 1.5     | 1.25   | -0.17242 | 0.15239    | -0.02882 | +             |
| 2             | 1    | 1.25 | 1.5     | 1.375  | -0.02882 | 0.15239    | 0.05829  | -             |
| 3             | 1.25 | 1.25 | 1.375   | 1.3125 | -0.02882 | 0.05829    | 0.01371  | -             |
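The bisection algorithm above can be sketched in Python for the same test function; the names and tolerance are illustrative.

```python
import math

def f(x):
    return math.exp(-x) - math.cos(x)

def bisection(f, x_lo, x_hi, tol=1e-8):
    """Repeatedly halve a bracketing interval [x_lo, x_hi] until shorter than tol."""
    if f(x_lo) * f(x_hi) > 0:
        raise ValueError("interval does not bracket a root")
    while (x_hi - x_lo) > tol:
        x_r = 0.5 * (x_lo + x_hi)       # midpoint
        if f(x_lo) * f(x_r) > 0:
            x_lo = x_r                  # root lies in the upper half
        else:
            x_hi = x_r                  # root lies in the lower half
    return 0.5 * (x_lo + x_hi)

root = bisection(f, 1.0, 1.5)           # converges toward 1.2927
```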


Method of False Position (Regula Falsi)

Description
  • Similar to bisection, this method starts with an interval [x_i, x_{i+1}] believed to contain the solution of f(x) = 0.

  • It approximates the curve of f(x) within the interval by the straight line connecting f(x_i) and f(x_{i+1}); the intersection of this line with the x-axis gives a new approximation of the root.

Algorithm
  1. Estimate the initial interval [x_i, x_{i+1}] bounding the root.

  2. Compute the new root estimate x_r = x_{i+1} − f(x_{i+1})·(x_i − x_{i+1}) / (f(x_i) − f(x_{i+1})).

  3. Evaluate f(x_i), f(x_{i+1}), and f(x_r).

  4. Check the product f(x_i) × f(x_r):

    • If positive: define x_r as the new lower bound x_i.

    • If negative: define x_r as the new upper bound x_{i+1}.

  5. Repeat steps 3 & 4 until stopping criteria satisfied.

Example
  • Test Function: f(x) = e^(−x) − cos(x)

  • Initial estimates: x_i = 1, x_{i+1} = 1.5, giving x_r = 1.265416357

Simulation of Iterations

| Iteration (i) | Root | x_i     | x_{i+1} | x_r     | f(x_i)   | f(x_{i+1}) | f(x_r)   | f(x_i)·f(x_r) |
|---------------|------|---------|---------|---------|----------|------------|----------|---------------|
| 1             | 1    | 1       | 1.5     | 1.26542 | -0.17242 | 0.15239    | -0.01853 | +             |
| 2             | 1    | 1.26542 | 1.5     | 1.29085 | -0.01853 | 0.15239    | -0.00127 | +             |
| 3             | 1    | 1.29085 | 1.5     | 1.29257 | -0.00127 | 0.15239    | -0.00008 | +             |
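A minimal Python sketch of false position for the same test function; the names and stopping test are illustrative.

```python
import math

def f(x):
    return math.exp(-x) - math.cos(x)

def false_position(f, x_lo, x_hi, tol=1e-9, max_iter=100):
    """Replace one bracket end with the x-intercept of the chord through the endpoints."""
    x_r = x_hi
    for _ in range(max_iter):
        x_prev = x_r
        x_r = x_hi - f(x_hi) * (x_lo - x_hi) / (f(x_lo) - f(x_hi))
        if f(x_lo) * f(x_r) > 0:
            x_lo = x_r                  # x_r becomes the new lower bound
        else:
            x_hi = x_r                  # x_r becomes the new upper bound
        if abs(x_r - x_prev) < tol:
            break
    return x_r

root = false_position(f, 1.0, 1.5)      # first chord intercept: 1.265416...
```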


Newton-Raphson 1st Method (Method of Tangents)

Description
  • This is a notable method for finding roots due to its simplicity and speed.

  • The method's drawback is that it requires the derivative f'(x) of the function along with the function f(x) itself.

  • It is derived from the Taylor Series expansion.


Algorithm
  1. Estimate the initial root approximation x_i.

  2. Compute the new root x_{i+1} = x_i − f(x_i)/f'(x_i).

  3. Evaluate both f(x_i) and f(x_{i+1}).

  4. Repeat steps 2 & 3 until stopping criteria satisfied.

Example
  • Test Function: f(x) = e^(−x) − cos(x)

  • Initial estimate: x_i = 1

  • Corresponding derivative: f'(x) = −e^(−x) + sin(x)

  • First iteration gives: x_{i+1} = 1.36407505

Simulation of Iterations

| Iteration (i) | Root | x_i     | x_{i+1} | f(x_i)   | f'(x_i) | f(x_{i+1}) |
|---------------|------|---------|---------|----------|---------|------------|
| 1             | 1    | 1       | 1.36408 | -0.17242 | 0.47359 | 0.05036    |
| 2             | 1    | 1.36408 | 1.29442 | 0.05036  | 0.72309 | 0.00119    |
| 3             | 1    | 1.29442 | 1.29270 | 0.00119  | 0.68800 | 0.00000    |
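The tangent-line iteration can be sketched in Python for the same test function; the names are illustrative.

```python
import math

def f(x):   return math.exp(-x) - math.cos(x)
def f_p(x): return -math.exp(-x) + math.sin(x)   # derivative of f

def newton_raphson(f, f_p, x_i, tol=1e-10, max_iter=50):
    """Follow the tangent line at x_i down to the x-axis, repeatedly."""
    for _ in range(max_iter):
        x_next = x_i - f(x_i) / f_p(x_i)
        if abs(x_next - x_i) < tol:
            return x_next
        x_i = x_next
    return x_i

root = newton_raphson(f, f_p, 1.0)      # iterates: 1, 1.36408, 1.29442, 1.29270, ...
```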


Newton-Raphson 2nd Method

Description
  • This method is calculus-based and derived from the first method, utilizing truncated Taylor’s series expansion.

  • It employs both the 1st- and 2nd-order derivatives to enhance the root approximation of f(x) = 0.

Algorithm
  1. Estimate the initial root approximation x_i.

  2. Compute the new root x_{i+1} = x_i − f(x_i) / [ f'(x_i) − f(x_i)·f''(x_i) / (2·f'(x_i)) ].

  3. Evaluate f(x_i) and f(x_{i+1}).

  4. Repeat steps 2 & 3 until stopping criteria satisfied.

Example
  • Test Function: f(x) = e^(−x) − cos(x)

  • Initial estimate: x_i = 1

  • Derivatives: f'(x) = −e^(−x) + sin(x), f''(x) = e^(−x) + cos(x)

  • First iteration gives: x_{i+1} = 1.269868

Simulation of Iterations

| Iteration (i) | Root | x_i     | x_{i+1} | f(x_i)   | f'(x_i) | f''(x_i) | f(x_{i+1}) |
|---------------|------|---------|---------|----------|---------|----------|------------|
| 1             | 1    | 1       | 1.26987 | -0.17242 | 0.47359 |          | -0.01554   |
| 2             | 1    | 1.26987 | 1.29269 | -0.01554 | 0.67419 | 0.57728  |            |
| 3             | 1    | 1.29269 |         |          |         |          |            |
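A minimal Python sketch of the second-order update. The denominator correction written here is the form that reproduces the tabulated iterates (1 → 1.26987 → 1.29269); the names are illustrative.

```python
import math

def f(x):    return math.exp(-x) - math.cos(x)
def f_p(x):  return -math.exp(-x) + math.sin(x)   # first derivative
def f_pp(x): return math.exp(-x) + math.cos(x)    # second derivative

def newton_second(f, f_p, f_pp, x_i, tol=1e-10, max_iter=50):
    """Newton step whose denominator is corrected by the curvature term."""
    for _ in range(max_iter):
        denom = f_p(x_i) - f(x_i) * f_pp(x_i) / (2.0 * f_p(x_i))
        x_next = x_i - f(x_i) / denom
        if abs(x_next - x_i) < tol:
            return x_next
        x_i = x_next
    return x_i

root = newton_second(f, f_p, f_pp, 1.0)   # iterates: 1, 1.26987, 1.29269, ...
```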


Secant Method

Description
  • The Secant Method does not require derivative calculations yet converges rapidly; its update formula is similar to that of Regula Falsi, but the two most recent points are always retained (no bracketing).

  • It utilizes two initial points.

Algorithm
  1. Estimate two initial points x_i and x_{i−1}.

  2. Evaluate f(x_i) and f(x_{i−1}).

  3. Compute the new root x_{i+1} = x_i − f(x_i)·(x_i − x_{i−1}) / (f(x_i) − f(x_{i−1})).

  4. The current x_i becomes the previous point x_{i−1} for the next iteration.

  5. Repeat steps 2–4 until stopping criteria satisfied.

Example
  • Test Function: f(x) = e^(−x) − cos(x)

  • Initial estimates: x_{i−1} = 0.5, x_i = 1

  • First iteration gives: x_{i+1} = 1.874097878

Simulation of Iterations

| Iteration (i) | Root | x_{i-1} | x_i     | x_{i+1} | f(x_{i-1}) | f(x_i)   | f(x_{i+1}) |
|---------------|------|---------|---------|---------|------------|----------|------------|
| 1             | 1    | 0.5     | 1       | 1.87410 | -0.27105   | -0.17242 | 0.45217    |
| 2             | 1    | 1       | 1.87410 | 1.24130 | -0.17242   | 0.45217  | -0.03456   |
| 3             | 1    | 1.87410 | 1.24130 | 1.28623 | 0.45217    | -0.03456 | -0.00443   |
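The secant iteration can be sketched in Python for the same test function; the names are illustrative.

```python
import math

def f(x):
    return math.exp(-x) - math.cos(x)

def secant(f, x_prev, x_i, tol=1e-10, max_iter=50):
    """Like Newton-Raphson, but the derivative is replaced by a finite difference."""
    for _ in range(max_iter):
        x_next = x_i - f(x_i) * (x_i - x_prev) / (f(x_i) - f(x_prev))
        if abs(x_next - x_i) < tol:
            return x_next
        x_prev, x_i = x_i, x_next       # shift: current point becomes the previous one
    return x_i

root = secant(f, 0.5, 1.0)              # iterates: 1.87410, 1.24130, 1.28623, ...
```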


Fixed-Point Iteration

Description
  • Also known as Picard iteration, this method establishes an auxiliary function g(x) from f(x) to find the root x_0 such that f(x_0) = 0. The auxiliary form is: x_0 = g(x_0).

  • The method's convergence does not greatly depend on the initial estimate, as long as a fixed point exists in the interval [x_1, x_2] (and |g'(x)| < 1 near it).

Example
  • Equation: x³ − x − 6 = 0

  • The equivalent representation: x³ = x + 6, i.e., x = ∛(6 + x)

  • The auxiliary function becomes: g(x) = ∛(x + 6).

Simulation of Iterations

| Iteration (i) | Root    | x_i     | g(x_i)  |
|---------------|---------|---------|---------|
| 1             | 0       | 0       | 1.81712 |
| 2             | 1.81712 | 1.81712 | 1.98464 |
| 3             | 1.98464 | 1.98464 | 1.99872 |
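The iteration x ← g(x) can be sketched in Python with g(x) = ∛(x + 6), which converges to the root x = 2; the names are illustrative.

```python
def g(x):
    """Auxiliary function: cube root of (x + 6)."""
    return (x + 6.0) ** (1.0 / 3.0)

def fixed_point(g, x_i, tol=1e-10, max_iter=100):
    """Iterate x <- g(x) until successive values agree to within tol."""
    for _ in range(max_iter):
        x_next = g(x_i)
        if abs(x_next - x_i) < tol:
            return x_next
        x_i = x_next
    return x_i

root = fixed_point(g, 0.0)              # iterates: 1.81712, 1.98464, 1.99872, ... -> 2
```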


Bairstow's Method

Description
  • This method addresses special issues with polynomials that have complex roots, allowing the extraction of quadratic factors using only real arithmetic.

  • When polynomials with real coefficients have complex roots, these occur in conjugate pairs corresponding to a quadratic factor of polynomial Pn(x)P_n(x).

Algorithm
  1. Initiate: Consider the general nth-degree polynomial P_n(x) = a_n·x^n + a_{n−1}·x^(n−1) + … + a_0.

  2. Factor out the quadratic: P(x) = (x² − r·x − s)·Q_{n−2}(x) + Remainder.

  3. Set up a system of equations and compute coefficients iteratively until convergence of factors rr and ss occurs.

  4. Once the quadratic factor is determined, roots are extracted two at a time; the polynomial reduces by two degrees enabling repetition of the method.

Example
  • Initial guesses for r and s: r_1 = 1.5, s_1 = −2.5.

Iteration Steps to Find Coefficients
  1. Begin with the polynomial coefficients stored in arrays; each iteration's values are built from the preceding ones.

  2. Conduct iterations to compute the values b_i and c_i, then solve for the adjustments Δr and Δs applied to r and s.
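The synthetic-division iteration can be sketched in Python. The lecture does not state a test polynomial, so the one below, P(x) = (x² + 1)(x² − 3x + 2) = x⁴ − 3x³ + 3x² − 3x + 2, is a hypothetical example, and the starting guesses are chosen near its complex-pair factor (the lecture's r₁ = 1.5, s₁ = −2.5 are generic starting values for its own example).

```python
def bairstow(a, r, s, tol=1e-12, max_iter=100):
    """Find r, s so that x^2 - r*x - s divides P(x) = a[n]x^n + ... + a[0].

    b holds the synthetic-division quotient and remainder; c holds the
    partial derivatives used in the 2x2 Newton system for dr, ds.
    """
    n = len(a) - 1                       # polynomial degree (needs n >= 3)
    for _ in range(max_iter):
        b = [0.0] * (n + 1)
        c = [0.0] * (n + 1)
        b[n] = a[n]
        b[n - 1] = a[n - 1] + r * b[n]
        for i in range(n - 2, -1, -1):
            b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
        c[n] = b[n]
        c[n - 1] = b[n - 1] + r * c[n]
        for i in range(n - 2, 0, -1):
            c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
        # Solve: c2*dr + c3*ds = -b1 ; c1*dr + c2*ds = -b0
        det = c[2] * c[2] - c[3] * c[1]
        dr = (-b[1] * c[2] + b[0] * c[3]) / det
        ds = (-b[0] * c[2] + b[1] * c[1]) / det
        r, s = r + dr, s + ds
        if abs(dr) < tol and abs(ds) < tol:
            break
    return r, s

# Hypothetical test polynomial, ascending coefficients: x^4 - 3x^3 + 3x^2 - 3x + 2
r, s = bairstow([2.0, -3.0, 3.0, -3.0, 1.0], 0.1, -0.9)
# converges to r = 0, s = -1, i.e. the quadratic factor x^2 + 1 (roots ±i)
```

Once a factor converges, its two roots follow from the quadratic formula, and the deflated polynomial Q_{n−2}(x) is processed the same way.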


Conclusion

  • This material documents the strategies and algorithms underpinning the roots of nonlinear equations, with detailed simulations showcasing the iterative methods and practical examples that illuminate their application in engineering and computational contexts.