We begin by loading packages for plotting, root finding, automatic differentiation, numeric integration, and symbolic manipulation:
using Plots
using Roots
using ForwardDiff
# D(f, k) returns the kth derivative of f as a function, computed via ForwardDiff
D(f, k=1) = k > 1 ? D(D(f), k-1) : x -> ForwardDiff.derivative(f, float(x))
using QuadGK
using SymPy
A Taylor polynomial of a function f(x) about x=c of degree n is formally defined by

$$T_n(x) = f(c) + f'(c)(x - c) + \frac{f''(c)}{2!}(x - c)^2 + \cdots + \frac{f^{(n)}(c)}{n!}(x - c)^n.$$
When n=1 this is the familiar tangent line approximation. Higher orders generally yield better approximations.
In Julia we can create this series numerically using the D function. It is useful to write our function so that it returns a function, as is done with the following. We use a quadratic approximation for the default order and expand around 0 as the default (the Maclaurin polynomial).
function taylor(f, n=2; c::Real=0)
x -> f(c) + sum([D(f, k)(c)/factorial(k)*(x-c)^k for k in 1:n])
end
taylor (generic function with 2 methods)
We can see graphically that the Taylor polynomial approximates the function f(x) around x=c. For example, we approximate cos(x) about c=0 and c=π/4:
f(x) = cos(x)
plot([f, taylor(f, 2, c=0), taylor(f, 2, c=pi/4)], -pi/2, pi/2)
We can see that taking higher orders can lead to better approximations. In the following we use a comprehension and splatting to avoid having to type plot([f, taylor(f,1), taylor(f,2), taylor(f,3), ..., taylor(f,6)], 0, 4):
f(x) = exp(x)
fs = [f, [taylor(f,n) for n in 1:6]...]
plot(fs, 0, 4) # f along with its first 6 polynomial approximations
You should be able to confirm that the higher-order approximations do a better job.
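As a quick numerical check (a sketch reusing the taylor function defined above; the name errs is just illustrative), we can compare the absolute error of each approximation at the right endpoint x=4:
errs = [abs(f(4) - taylor(f, n)(4)) for n in 1:6]   # errors at x = 4 shrink as the order increases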
Mathematically, the above picture is governed by the following error bound for Taylor polynomials:

$$\big| f(x) - T_n(x) \big| \le \left| \frac{f^{(n+1)}(z^*)}{(n+1)!} (x - c)^{n+1} \right|,$$
where z∗ is chosen to maximize the above expression between x and c. That is, the largest possible value (in absolute value) for the next term in the Taylor polynomial provides a bound for the error.
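For instance (a small sketch reusing the cos example from above; the names actual and bound are only illustrative), the error of the quadratic approximation about c=0 at x=1 stays below the bound, since the third derivative of cos(x) is sin(x), which on [0,1] is largest at z∗=1:
f(x) = cos(x)
actual = abs(f(1) - taylor(f, 2)(1))    # actual error of the quadratic approximation at x = 1
bound = sin(1) / factorial(3) * 1^3     # |f'''(z*)| = sin(1) is the largest value on [0, 1]
actual <= bound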
Verify graphically that the approximation log(1+x)≈x comes from using the Maclaurin polynomial of degree 1. The approximation is not good for all x. What seems like a reasonable range of x over which it is "good"?
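One possible starting point (a sketch; the plotting interval is only a suggestion to adjust as you explore):
f(x) = log(1 + x)
plot([f, taylor(f, 1)], -0.5, 1)   # the degree-1 Maclaurin polynomial is the line y = x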
Let f(x)=tan(x). Plot the first 6 Maclaurin polynomials and f over [−π/3,π/3]. Is the approximation good at x=0? Are any of the approximations good at π/3?
Let f(x)=sin(x). Plot the first 6 Maclaurin polynomials and f over [−π/2,π/2]. Is the approximation good at x=0? Are any of the approximations good at π/2?
Again for f(x)=sin(x). Plot the function and the Maclaurin polynomials of degree 3 and 4. Is the 4th degree polynomial a better approximation? Why or why not?
The function f(x)=e^(-x^2) does not have an antiderivative expressible in terms of elementary functions, hence it cannot be integrated exactly using the fundamental theorem of calculus. One can estimate the integral using Riemann sums, or one can approximate the function by an easily integrated function and integrate that instead. Of course, Maclaurin polynomials are easily integrated.
Plot several Maclaurin polynomials that approximate f(x) over the interval [0,0.5]. Choose the lowest order one that graphically matches over that interval. Integrate that approximation and compare to the value given by:
f(x) = exp(-x^2)
quadgk(f, 0, 0.5)
(0.46128100641279246, 0.0)
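Once an order is chosen graphically, the approximation can be integrated and compared. Here is a minimal sketch of that step (the choice of order 4 and the name p are only for illustration; the exercise asks you to pick the lowest order that matches):
f(x) = exp(-x^2)
p = taylor(f, 4)          # T_4(x) = 1 - x^2 + x^4/2 for this f
quadgk(p, 0, 0.5)[1]      # compare with the value from quadgk(f, 0, 0.5) above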
Let f(x)=e^x, a monotone function, and let T_3(x) be its 3rd-order Maclaurin polynomial. Verify graphically that over the interval [0,4] the difference satisfies

$$\big| f(x) - T_3(x) \big| < \frac{f^{(4)}(4)}{4!} \cdot (x - 0)^4.$$

(We know that for this function the fourth derivative f^{(4)}(x)=e^x is monotone increasing, so it is largest at the right endpoint; hence the use of f^{(4)}(4) in the bound.)
Is the bound a "tight" bound over the entire interval, in that the actual error is close to the given bound? (Check graphically.)
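A minimal sketch of one way to make the comparison graphically (the names T3, err, and bound are only illustrative):
f(x) = exp(x)
T3 = taylor(f, 3)
err(x) = abs(f(x) - T3(x))                # actual error of the cubic approximation
bound(x) = exp(4) / factorial(4) * x^4    # f''''(x) = e^x is largest at the right endpoint x = 4
plot([err, bound], 0, 4)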
The following two formulas come from different eras of physics:

$$k_1(v) = \frac{v^2}{2}, \qquad k_2(v) = c^2\left(\frac{1}{\sqrt{1 - (v/c)^2}} - 1\right).$$

We can express these via:
k1(v) = v^2 / 2
k2(v; c=10) = c^2 * (1/sqrt(1 - (v/c)^2) - 1)
k2 (generic function with 1 method)
Both describe the kinetic energy, but one uses Einstein's theory of relativity. For any value of c>0, show that the two agree up to their second-order Maclaurin polynomial approximations.
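One way to see this numerically (a sketch using the taylor function from above, with c fixed at the default value of 10 built into k2; a symbolic approach would be needed for general c):
plot([taylor(k1, 2), taylor(k2, 2)], -5, 5)   # the two quadratic approximations coincide; both reduce to v^2/2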
Here is a different motivation of the Maclaurin polynomial that can be explored symbolically. (This generalizes the tangent line: there the interpolating "polynomial" is a secant line, and in the limit the secant lines combine to give the tangent line.)
For concreteness let f(x)=e^x. For any b>0, there is just one fifth-degree polynomial, p(x), satisfying p(i⋅b)=f(i⋅b) for i=0,1,2,3,4,5. This is polynomial interpolation at 6 points.
This particular solution can be found as follows using SymPy:
@vars a0 a1 a2 a3 a4 a5 b
f(x) = exp(x)
g(x) = a0 + a1*x + a2*x^(2//1) + a3*x^(3//1) + a4*x^(4//1) + a5*x^(5//1) - f(x)
di = solve(Sym[g(i*b) for i in 0:5], [a0, a1, a2, a3, a4, a5])  # solve the 6 interpolation conditions for the 6 coefficients
At first glance, this seems to have nothing to do with the Maclaurin polynomial for f(x), T_5(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120. But wait, let's take the limit as b goes to zero of each value in di. For example, the fifth term:
limit(di[a5], b, 0)
The answer is the a5 coefficient of T_5(x). Verify that this is the case for each of a0, ..., a4.
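A short sketch that checks all the coefficients at once (the expected limits are the Maclaurin coefficients 1, 1, 1/2, 1/6, 1/24, 1/120):
[limit(di[ai], b, 0) for ai in (a0, a1, a2, a3, a4, a5)]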