Unit 10 Notes: Taylor & Maclaurin Series (AP Calculus BC)
Taylor Polynomial Approximations
What a Taylor polynomial is
A Taylor polynomial is a polynomial that imitates a function near a specific point. The key idea is: if you match enough derivative information at a point, a polynomial can “hug” the function extremely well in a neighborhood of that point.
Concretely, suppose you care about a function f(x) near x=a. You build a polynomial P_n(x) of degree n so that it matches:
- the function value: P_n(a)=f(a)
- the first derivative: P_n'(a)=f'(a)
- the second derivative: P_n''(a)=f''(a)
- and so on, up through the nth derivative.
This creates a polynomial approximation that is “locally tuned” to the function’s behavior at a.
Why Taylor polynomials matter
Taylor polynomials matter because they let you:
- Approximate values of complicated functions using simple arithmetic.
- Estimate errors (how accurate your approximation is) using rigorous bounds.
- Set up Taylor series (infinite versions of these polynomials), which power many convergence and representation questions in Unit 10.
A good mental model: a Taylor polynomial is like taking a “snapshot” of the function at x=a, including slope, concavity, and higher-order shape, then reconstructing the nearby curve using a polynomial.
How the Taylor polynomial formula works
The nth-degree Taylor polynomial for f(x) about x=a is
P_n(x)=\sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k
Here:
- f^{(k)}(a) means the kth derivative evaluated at a.
- k! (factorial) normalizes the coefficients.
- (x-a)^k shifts the approximation to be centered at a.
When a=0, it’s called a Maclaurin polynomial (a special case of Taylor).
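The formula translates directly into a few lines of Python. Here is a minimal sketch (the names `taylor_eval` and `derivs_at_a` are illustrative, not standard library functions) that evaluates P_n(x) from a list of derivative values at the center:

```python
from math import factorial, log

def taylor_eval(derivs_at_a, a, x):
    """Evaluate P_n(x) = sum_{k=0}^{n} f^(k)(a)/k! * (x-a)^k,
    where derivs_at_a = [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d / factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs_at_a))

# f(x) = ln(x) about a = 1: f(1)=0, f'(1)=1, f''(1)=-1, f'''(1)=2
p3_at_1_1 = taylor_eval([0, 1, -1, 2], 1.0, 1.1)
print(p3_at_1_1, log(1.1))  # the two values agree to about 4 decimal places
```

Note that the helper never differentiates anything: once you have the derivative values at a, building P_n(x) is pure arithmetic.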
Notation you’ll see (and how to translate it)
| Idea | Common notation | Meaning |
|---|---|---|
| nth Taylor polynomial about a | P_n(x) or T_n(x) | Polynomial approximation of degree n |
| Taylor series about a | \sum_{n=0}^{\infty} c_n(x-a)^n | Infinite power series centered at a |
| Maclaurin polynomial/series | Same but with a=0 | Centered at 0 |
Example 1: Build a Maclaurin polynomial for \sin(x)
The derivatives of trig functions cycle, so \sin(x) is a classic first example.
We compute derivatives and evaluate at 0:
- f(x)=\sin(x) so f(0)=0
- f'(x)=\cos(x) so f'(0)=1
- f''(x)=-\sin(x) so f''(0)=0
- f'''(x)=-\cos(x) so f'''(0)=-1
- f^{(4)}(x)=\sin(x) so f^{(4)}(0)=0
- f^{(5)}(x)=\cos(x) so f^{(5)}(0)=1
The degree-5 Maclaurin polynomial is
P_5(x)=\sum_{k=0}^{5} \frac{f^{(k)}(0)}{k!}x^k
Only the nonzero derivative values contribute:
P_5(x)=x-\frac{x^3}{3!}+\frac{x^5}{5!}
So near 0,
\sin(x)\approx x-\frac{x^3}{6}+\frac{x^5}{120}
A common mistake here is forgetting factorials (for example, writing x-\frac{x^3}{3} instead of x-\frac{x^3}{6}).
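A quick numerical check (a sketch using Python's `math` module) shows how tight the fit is near 0:

```python
from math import sin, factorial

def sin_p5(x):
    # Degree-5 Maclaurin polynomial for sin: x - x^3/3! + x^5/5!
    return x - x**3 / factorial(3) + x**5 / factorial(5)

x = 0.3
print(abs(sin(x) - sin_p5(x)))  # tiny: the first omitted term is -x^7/7!
```

At x=0.3 the gap is on the order of 10^{-8}, which previews the Lagrange Error Bound idea: the error is controlled by the first term you leave out.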
Example 2: Use a Taylor polynomial to approximate a value
Approximate e^{0.1} using the degree-3 Maclaurin polynomial for e^x.
For f(x)=e^x, every derivative is e^x, so f^{(k)}(0)=1 for all k. Thus
P_3(x)=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}
Plug in x=0.1:
e^{0.1}\approx 1+0.1+\frac{0.1^2}{2}+\frac{0.1^3}{6}
Compute:
- 0.1^2=0.01 so \frac{0.01}{2}=0.005
- 0.1^3=0.001 so \frac{0.001}{6}\approx 0.0001667
So
e^{0.1}\approx 1.1051667
This is the basic move behind many AP questions: “Use a Taylor polynomial of degree n about a to approximate f(b).”
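The same computation in Python (a minimal sketch) confirms the hand arithmetic above:

```python
from math import exp, factorial

x = 0.1
p3 = 1 + x + x**2 / factorial(2) + x**3 / factorial(3)
print(p3)      # ≈ 1.1051667, matching the hand computation
print(exp(x))  # true value for comparison
```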
Exam Focus
- Typical question patterns:
- “Find the Taylor polynomial of degree n for f(x) about x=a.”
- “Use a Maclaurin polynomial to approximate f(c) and show work.”
- “Given values of derivatives at a, write P_n(x) (no re-differentiation needed).”
- Common mistakes:
- Mixing up a and x (writing powers of x when the center is a\neq 0; you need (x-a)^k).
- Forgetting factorials in coefficients.
- Using a polynomial beyond its “safe zone” (approximating far from a without any error reasoning).
Lagrange Error Bound
What the error term is
A Taylor polynomial is an approximation, so you need a way to quantify the gap:
R_n(x)=f(x)-P_n(x)
This difference R_n(x) is called the **remainder** (or error) after using the degree-n Taylor polynomial.
Why the Lagrange error bound matters
The AP exam often asks for a guaranteed bound on the error: not “what the exact error is” (usually hard), but “how big it could be at most.” The Lagrange Error Bound gives a clean way to do that if you can bound the next derivative.
This is especially important when you approximate numerical values (like \sin(0.2) or e^{0.1}) and must justify the number of correct decimal places.
The Lagrange Error Bound formula (how it works)
If you use the degree-n Taylor polynomial about a, and if f^{(n+1)}(t) exists on the interval between a and x, then
|R_n(x)|\le \frac{M}{(n+1)!}|x-a|^{n+1}
where M is any number such that
|f^{(n+1)}(t)|\le M
for all t between a and x.
Interpretation:
- The error depends on the size of the next derivative (captured by M).
- The error shrinks quickly when |x-a| is small and as n grows, because factorials grow fast.
A subtle but crucial point: the bound uses the maximum possible size of the (n+1)st derivative on the interval between a and x. If you underestimate M, your “bound” may be false.
Example 1: Bound the error for approximating e^{0.1} with P_3(x)
We previously used P_3(x) about a=0 to approximate e^{0.1}. Now bound the error.
Here f(x)=e^x and n=3, so we need f^{(4)}(x)=e^x.
On the interval between 0 and 0.1, the maximum of e^t occurs at t=0.1 (because e^t increases). So we can take
M=e^{0.1}
Then
|R_3(0.1)|\le \frac{e^{0.1}}{4!}|0.1|^4
Compute pieces:
- 4!=24
- 0.1^4=0.0001
So
|R_3(0.1)|\le \frac{e^{0.1}}{24}\cdot 0.0001
Since e^{0.1}\approx 1.1052, the bound is approximately
|R_3(0.1)|\le \frac{1.1052}{24}\cdot 0.0001\approx 0.0000046
So the error is at most about 5 millionths, and the approximation is certainly accurate to 4 decimal places.
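You can verify numerically (a sketch, not something the exam requires) that the actual error really does sit under the Lagrange bound:

```python
from math import exp, factorial

x, n = 0.1, 3
M = exp(0.1)  # max of e^t on [0, 0.1], since e^t is increasing
bound = M / factorial(n + 1) * abs(x) ** (n + 1)

p3 = 1 + x + x**2 / 2 + x**3 / 6
actual_error = abs(exp(x) - p3)
print(actual_error, bound)  # actual error is smaller than the bound
assert actual_error <= bound
```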
Example 2: Choosing M correctly
Suppose you approximate \sin(x) near 0 using P_5(x) and want an error bound at x=0.3.
Here n=5, so we need the 6th derivative. The derivatives of \sin(x) cycle, and the 6th derivative is -\sin(x). So
|f^{(6)}(t)|=|\sin(t)|\le 1
for all real t. You can safely choose M=1.
Then
|R_5(0.3)|\le \frac{1}{6!}|0.3|^6
This is usually enough for AP justification; you don’t need to fully decimal-evaluate unless asked.
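Numerically (again a sketch), the safe choice M=1 gives a bound that comfortably covers the true error:

```python
from math import sin, factorial

x, n = 0.3, 5
M = 1.0  # |f^(6)(t)| = |sin(t)| <= 1 for all real t
bound = M / factorial(n + 1) * abs(x) ** (n + 1)  # = 0.3^6 / 720

p5 = x - x**3 / factorial(3) + x**5 / factorial(5)
print(abs(sin(x) - p5) <= bound)  # the bound holds
```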
Exam Focus
- Typical question patterns:
- “Use the Lagrange Error Bound to find an upper bound on the error when approximating f(x) by P_n(x) at x=b.”
- “Find the degree n needed so the approximation error is less than a given tolerance.”
- “Justify that the approximation is within 0.001 (or has a certain number of correct decimals).”
- Common mistakes:
- Using f^{(n)} instead of f^{(n+1)} in the bound.
- Picking M from the wrong interval (you must use the interval between a and the target x).
- Treating the bound as the exact error; it’s only a guaranteed maximum.
Radius and Interval of Convergence of Power Series
What a power series is
A power series is an infinite polynomial-like expression, typically centered at some point a:
\sum_{n=0}^{\infty} c_n(x-a)^n
For each fixed x, this becomes an infinite series of numbers. The big question is: for which x values does it converge?
Why convergence matters
If a power series converges, it defines a function (you can plug in x and get a finite value). If it diverges, it doesn’t represent a meaningful value there.
On the AP exam, you’re often asked to:
- find the radius of convergence (how far from the center it converges)
- find the interval of convergence (the exact set of x values, including endpoint checks)
The radius vs. the interval (what’s the difference?)
For many power series centered at a, there exists a number R\ge 0 (possibly \infty) such that:
- the series converges absolutely for |x-a|<R
- the series diverges for |x-a|>R
That R is the radius of convergence. The interval of convergence is typically
(a-R,a+R)
plus whatever endpoints actually converge.
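A quick worked illustration (a standard textbook example): the series
\sum_{n=1}^{\infty} \frac{(x-2)^n}{n}
has center a=2 and radius R=1, so it converges for 1<x<3. At the endpoint x=1 it becomes the alternating harmonic series \sum \frac{(-1)^n}{n}, which converges; at x=3 it becomes the harmonic series \sum \frac{1}{n}, which diverges. The interval of convergence is therefore [1,3): one endpoint in, one endpoint out.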
How to find the radius (Ratio Test is the workhorse)
The most common method is the Ratio Test applied to the general term.
If your series is
\sum_{n=0}^{\infty} u_n
and
L=\lim_{n\to\infty} \left|\frac{u_{n+1}}{u_n}\right|
then:
- if L<1, the series converges (absolutely)
- if L>1, the series diverges
- if L=1, the test is inconclusive, and other tests are needed
In a power series, u_n depends on x, so the Ratio Test usually produces an inequality of the form |x-a|<R; that R is the radius of convergence, and the endpoints (where L=1) must be checked separately.
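As a sanity check (a sketch only; on the exam the limit is taken by hand), you can numerically estimate the coefficient ratio for a series such as \sum_{n=1}^{\infty} n x^n:

```python
# For sum n * x^n, the Ratio Test gives L = lim (n+1)/n * |x| = |x|,
# so the series converges for |x| < 1 (radius R = 1).
def coeff(n):
    return n  # c_n = n in this hypothetical example

n_big = 10**6
ratio = coeff(n_big + 1) / coeff(n_big)  # ≈ 1 for large n
R = 1 / ratio
print(R)  # ≈ 1.0
```

The numeric ratio only approximates the limit; it cannot decide the endpoint cases x=\pm 1, which need separate series tests (here both endpoints diverge since n x^n does not tend to 0).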