Taylor Series and Taylor's Formula – Lecture Notes

Taylor series overview

  • Taylor series are built from derivatives of a function at a chosen base point a inside an open interval where the function is well-behaved.

  • Goal: start from a function f and express it as a power series around a:
    f(x)=\sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}\,(x-a)^k.

  • Requirements: the function must have derivatives of all orders on the open interval (the notion of "smooth" here).

  • Open intervals are used so that every point of the interval — in particular the base point a — has a two-sided neighborhood on which all the derivatives exist.

  • The coefficients are determined by derivatives at the base point a; the factorials arise from repeatedly differentiating powers of (x-a).

  • Conceptual point: differentiation of powers brings down exponents, producing factorials in the coefficients.

  • Special case: Maclaurin series is the Taylor series at a=0:
    f(x)=\sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!}\,x^k.

  • A function has at most one power-series representation about a given point (uniqueness of power series): within the interval of convergence, the coefficients are uniquely determined by the function, and the series uniquely determines the function.
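The general formula above can be turned into a tiny evaluator. This is a minimal sketch (the helper name `taylor_sum` is ours, not from the notes): given the derivative values $f^{(k)}(a)$, it sums $\sum_k \frac{f^{(k)}(a)}{k!}(x-a)^k$ up to a finite order.

```python
import math

def taylor_sum(derivs_at_a, a, x):
    """Evaluate the Taylor partial sum sum_k f^(k)(a)/k! * (x-a)^k,
    given a list of derivative values f^(k)(a) for k = 0, 1, 2, ..."""
    return sum(d / math.factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs_at_a))

# For e^x at a = 0 every derivative equals 1, so the partial sum at
# x = 1 should approach e as more derivative values are supplied.
approx = taylor_sum([1.0] * 15, a=0.0, x=1.0)
print(approx)
```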

Why these series look the way they do

  • Differentiating a term like $x^k$ brings down k and reduces the exponent by 1; after k differentiations the term $x^k$ has become the constant $k!$ (that is, $k!\,x^0$).
  • Hence, the kth coefficient in the Taylor series is tied to the kth derivative at a, divided by k!.
  • This is why the general form uses factorials in the denominator and derivatives evaluated at the base point.
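The "differentiation produces factorials" point can be seen mechanically. A small sketch (our own helper `ddx`, representing a polynomial as a list of coefficients $[c_0, c_1, \dots]$): differentiating $x^5$ five times leaves exactly the constant $5! = 120$.

```python
import math

def ddx(coeffs):
    """Differentiate a polynomial given as coefficients [c0, c1, c2, ...]:
    the derivative of c_j x^j is j * c_j x^(j-1)."""
    return [j * c for j, c in enumerate(coeffs)][1:]

# Start from x^5 (a single 1 in the degree-5 slot) and differentiate 5 times.
p = [0, 0, 0, 0, 0, 1]
for _ in range(5):
    p = ddx(p)
print(p)  # [120], i.e. the constant 5! = 120
```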

Base point choice and open interval

  • The Taylor series is anchored at a point a in the interval I where f has derivatives of all orders.
  • The interval of convergence is an open interval around a (call it $(a-R, a+R)$ for some R>0). Endpoints can sometimes be included, but in general we start with an open interval.

Maclaurin series (special case)

  • When a=0, the Taylor series simplifies to the Maclaurin series:
    f(x)=\sum_{k=0}^{\infty} \frac{f^{(k)}(0)}{k!}\,x^k.

Example 1: exponential function $e^x$

  • Derivatives: $f^{(k)}(x)=e^x$, so $f^{(k)}(0)=e^0=1$ for all k.
  • Maclaurin series:
    e^x=\sum_{k=0}^{\infty} \frac{x^k}{k!}.
  • This is particularly nice because every derivative is the same as the function itself, giving a very simple, highly usable series.
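The simplicity of this series makes it easy to check numerically; a quick sketch (the function name `exp_series` is ours) comparing partial sums against `math.exp`:

```python
import math

def exp_series(x, n_terms):
    """Partial sum of the Maclaurin series e^x = sum_k x^k / k!."""
    return sum(x ** k / math.factorial(k) for k in range(n_terms))

# With 20 terms the partial sum at x = 1 agrees with e to double precision.
print(exp_series(1.0, 20), math.exp(1.0))
```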

Example 2: sine function $\sin x$

  • Derivative pattern:
    • $f(x)=\sin x$, $f'(x)=\cos x$, $f''(x)=-\sin x$, $f'''(x)=-\cos x$, $f^{(4)}(x)=\sin x$, …
    • Evaluated at 0: $\sin(0)=0$, $\cos(0)=1$, $-\sin(0)=0$, $-\cos(0)=-1$, $\sin(0)=0$, …
  • The pattern has period 4, alternating signs and zeros for even derivatives at 0.
  • Maclaurin series (collecting only odd powers):
    \sin x= x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+\cdots = \sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{(2n+1)!}.
  • Insight: only odd powers appear; signs alternate because of the derivative pattern.
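As with the exponential, the sine series can be checked directly against the library function; a short sketch (helper name `sin_series` is ours):

```python
import math

def sin_series(x, n_terms):
    """Partial sum of sum_n (-1)^n x^(2n+1) / (2n+1)! — odd powers only."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(n_terms))

# Ten terms already reach roughly double precision at x = 1.
print(sin_series(1.0, 10), math.sin(1.0))
```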

Small-angle approximation and remainder intuition

  • A quick application of the sine series is the small-angle approximation: for small |x|, \sin x \approx x.
  • Rigorous justification requires the remainder term from Taylor’s theorem:
    \sin x = \Big(\text{sum of first few terms}\Big) + R_n(x), and we want $\lim_{x\to 0} \frac{R_n(x)}{x}=0$ for a fixed n (or more generally $R_n(x) \to 0$ as $x\to 0$).
  • Taylor’s theorem provides a precise remainder: for sufficient derivatives,
    R_n(x)=\frac{f^{(n+1)}(z)}{(n+1)!}\,(x-a)^{n+1},
    where z lies strictly between a and x.
  • In the sine example, using the expansion and remainder shows the limit $\lim_{x\to 0} \frac{\sin x}{x}=1$ with a rigorous remainder control once Taylor’s formula is applied.
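This remainder control can be verified numerically. A small sketch, assuming Taylor's theorem with n = 2 (for sine, $T_2(x) = x$ since the $x^2$ coefficient is zero, and $|f'''(z)| = |{\cos z}| \le 1$ gives the bound $|x|^3/3!$):

```python
import math

# T_2(x) = x for sine, and Taylor's theorem with n = 2 bounds the error:
# |sin x - x| <= |x|^3 / 3!, because |f'''(z)| = |cos z| <= 1.
for x in [0.5, 0.1, 0.01]:
    err = abs(math.sin(x) - x)
    bound = abs(x) ** 3 / 6
    assert err <= bound
    # Dividing by |x|: err/|x| <= x^2/6 -> 0, which is why sin(x)/x -> 1.
    print(x, err, bound)
```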

Taylor's formula and remainder (key theorem)

  • Define the nth degree Taylor polynomial (partial sum):
    T_n(x)=\sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}\,(x-a)^k,
    which is a degree-n polynomial.
  • Remainder term: $R_n(x)=f(x)-T_n(x)$.
  • Taylor's formula (with remainder):
    f(x)=T_n(x)+\frac{f^{(n+1)}(z)}{(n+1)!}\,(x-a)^{n+1},
    for some $z$ between $a$ and $x$ (assuming $f^{(n+1)}$ exists on the interval).
  • Interpretation of the remainder:
    • It measures the error of truncating the Taylor series after n terms.
    • The criterion for convergence to f is that $R_n(x)\to 0$ as $n\to\infty$ (for fixed x, within the interval of convergence).
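Taylor's formula can be tested concretely. A sketch for sine (the helper `sin_taylor` is our name; it uses the period-4 derivative cycle from the earlier example, and the bound uses $|f^{(n+1)}(z)| \le 1$):

```python
import math

def sin_taylor(a, x, n):
    """Degree-n Taylor polynomial of sine about a. The derivatives of
    sine cycle with period 4: sin, cos, -sin, -cos."""
    derivs = [math.sin(a), math.cos(a), -math.sin(a), -math.cos(a)]
    return sum(derivs[k % 4] / math.factorial(k) * (x - a) ** k
               for k in range(n + 1))

a, x, n = 1.0, 1.3, 5
actual_error = abs(math.sin(x) - sin_taylor(a, x, n))
# Every derivative of sine is bounded by 1, so Taylor's formula gives
# |R_n(x)| <= |x - a|^(n+1) / (n+1)!.
bound = abs(x - a) ** (n + 1) / math.factorial(n + 1)
print(actual_error, bound)  # actual_error <= bound
```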

Analytic vs smooth

  • Analytic functions: functions for which a convergent power series around a point actually represents the function on a (possibly smaller) interval.
  • Analyticity is stronger than smoothness (the existence of derivatives of all orders).
  • Not all smooth functions are analytic: the standard counterexample is $f(x)=e^{-1/x^2}$ for $x\ne 0$ with $f(0)=0$, which is smooth everywhere but whose Maclaurin series is identically zero and therefore does not represent f near 0. And even an analytic function is represented by its series only within the radius of convergence.
  • If a function has a power-series representation that converges on an interval, the coefficients must match the Taylor coefficients: if
    f(x)=\sum_{k=0}^{\infty} c_k\,(x-a)^k,
    then evaluating at $x=a$ gives $c_0=f(a)$; differentiating term-by-term and evaluating at $x=a$ gives successively $c_1=f'(a),\quad c_2=\frac{f''(a)}{2!},\quad c_k=\frac{f^{(k)}(a)}{k!}$,
    so the power-series coefficients coincide with Taylor coefficients.
  • The result shows that analytic representations, when they exist, coincide with the Taylor series.
  • In general, a Taylor series can be written for any smooth function, but convergence to the original function is not guaranteed without the power-series (analyticity) property.
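The smooth-but-not-analytic phenomenon can be illustrated numerically. A sketch, assuming the standard fact that every derivative of $e^{-1/x^2}$ (extended by 0 at the origin) vanishes at 0, so every Maclaurin partial sum is identically zero:

```python
import math

def f(x):
    """The classic smooth-but-not-analytic function:
    f(x) = exp(-1/x^2) for x != 0, and f(0) = 0."""
    return math.exp(-1.0 / x ** 2) if x != 0 else 0.0

# Every derivative of f at 0 is 0 (a standard fact), so T_n(x) = 0 for
# all n -- yet f itself is nonzero away from the origin:
x = 0.5
print(f(x))  # e^{-4}, clearly not 0, so the Maclaurin series misses f
```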

Example: proving convergence of the Maclaurin series for $e^x$ via Taylor’s remainder

  • We know the Maclaurin expansion:
    e^x=\sum_{k=0}^{\infty} \frac{x^k}{k!}.
  • Remainder for $e^x$ with a=0:
    R_n(x)=\frac{e^{z}}{(n+1)!}\,x^{n+1},
    where $z$ lies between 0 and x.
  • Bounding: for x≥0 the point z satisfies $0\le z\le x$, so $e^0\le e^z\le e^x$ and therefore $0\le R_n(x) \le e^{x}\,\frac{x^{n+1}}{(n+1)!}$.
  • Apply the ratio-test idea to the bounding term $b_n = e^{x}\,\frac{x^{n+1}}{(n+1)!}$: the ratio of successive bounds is
    \frac{b_{n+1}}{b_n} = \frac{x}{n+2} \xrightarrow[n\to\infty]{} 0,
    so $b_n\to 0$,
    hence $R_n(x)\to 0$ for all fixed x, so the Maclaurin series converges to $e^x$ on the entire real line.
  • The presenter notes that the x<0 case is similar (there one bounds $e^z \le e^0 = 1$) and leaves it as an exercise.
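The remainder bound for $e^x$ can be checked directly; a sketch (the name `partial` for the partial sum $T_n$ is ours):

```python
import math

def partial(x, n):
    """T_n(x): the partial sum of e^x through the x^n / n! term."""
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 2.0
for n in [2, 5, 10, 15]:
    remainder = math.exp(x) - partial(x, n)
    bound = math.exp(x) * x ** (n + 1) / math.factorial(n + 1)
    print(n, remainder, bound)  # remainder stays below the bound; both -> 0
```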

Applications and techniques using Taylor series

  • Small-angle approximation justification (revisited): use the sine Maclaurin series and remainder to show $\lim_{x\to 0} \frac{\sin x}{x}=1$ rigorously.
  • Numerical integration via Taylor series:
    • When an antiderivative is not elementary, approximate the integrand by its Taylor series and integrate term-by-term.
    • Example: approximate $\int_0^1 \sin(x^2)\,dx$ using the Maclaurin series for $\sin x$ with $x$ replaced by $x^2$: $\sin(x^2)=\sum_{n=0}^{\infty}(-1)^n\frac{x^{4n+2}}{(2n+1)!}$,
      so
      \int_0^1 \sin(x^2)\,dx = \sum_{n=0}^{\infty}(-1)^n \frac{1}{(2n+1)!}\int_0^1 x^{4n+2}\,dx = \sum_{n=0}^{\infty}(-1)^n\frac{1}{(2n+1)!\,(4n+3)}.
    • The first few terms give a good approximation because the terms decay rapidly (factorial growth in the denominator).
  • Another perspective highlighted in the talk:
    • Power-series representations can be used to transform difficult problems (like certain integrals) into tractable series.
    • Other tools (mentioned but not developed in class) include Fourier transforms and complex-analytic extensions, which connect to power-series behavior in the complex plane.
  • Practical exercise note from the talk:
    • On an assignment, you may be asked to determine how many terms are needed to achieve a specified accuracy (e.g., accuracy of 0.1) using Taylor’s remainder to bound the error.
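The integral example and the accuracy question can be sketched together. A minimal sketch (the helper name `integral_terms` is ours; the error control uses the alternating-series bound, where the truncation error is at most the first omitted term):

```python
import math

def integral_terms(n_max):
    """Terms (-1)^n / ((2n+1)! (4n+3)) of the series for the integral
    of sin(x^2) from 0 to 1."""
    return [(-1) ** n / (math.factorial(2 * n + 1) * (4 * n + 3))
            for n in range(n_max)]

# Alternating series with decreasing terms: after summing terms 0..n-1,
# the error is at most |term n|. Find the smallest such n for accuracy 0.1.
tol = 0.1
n = 0
while 1 / (math.factorial(2 * n + 1) * (4 * n + 3)) > tol:
    n += 1
print("terms needed:", n)  # a single term, 1/3, is already within 0.1

# Four terms give a much better approximation (next omitted term ~ 1.5e-7).
approx = sum(integral_terms(4))
print("4-term approximation:", approx)
```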

Summary: key takeaways

  • Taylor series provide a local power-series representation of a smooth function around a point a: f(x)=\sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!}\,(x-a)^k.
  • The Maclaurin series is the special case centered at 0.
  • The remainder $R_n(x)$ controls the error of truncation; Taylor's theorem guarantees a precise form: $R_n(x)=\frac{f^{(n+1)}(z)}{(n+1)!}\,(x-a)^{n+1},\quad z\text{ between }a\text{ and }x$.
  • Analyticity (existence of a convergent power-series representation) is stronger than smoothness; not every smooth function is analytic, and when a power-series representation exists, its coefficients agree with the Taylor coefficients.
  • Examples illustrate the concrete series:
    • e^x=\sum_{k=0}^{\infty} \frac{x^k}{k!}
    • \sin x=\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n+1}}{(2n+1)!}
  • Theoretical and practical applications include validating the small-angle approximation rigorously and using Taylor series to approximate otherwise intractable integrals.

Connections to broader topics (brief mentions)

  • The DNA metaphor: power-series representations can uniquely determine a function on their interval of convergence.
  • In complex analysis, extending a real-valued function to the complex plane and studying its power-series behavior informs contour integration and other techniques (not covered in detail in this class).
  • Conceptual bridge to numerical methods: Taylor series underpin many approximation techniques and error estimations used in computations.