High-order polynomial approximations to special functions and integral transforms

The aim of this example is to show that using TaylorIntegration.jl we can evaluate with high accuracy definite integrals of the form

$$ I_{\omega}(x) = \int_{x_0}^xf_{\omega}(t)\mathrm{d} t $$

for smooth integrands $f_{\omega}$, which may depend on a parameter $\omega$. In particular, combining high-accuracy integration with jet transport techniques, we are able to obtain high-order, dense polynomial representations of special functions and integral transforms. The trick is to translate the definite integral problem into an initial value problem for an explicit ODE, thanks to the fundamental theorem of calculus.
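
Concretely, if $I_{\omega}(x)$ denotes the integral above, the fundamental theorem of calculus gives the initial value problem

$$ \frac{\mathrm{d}I_{\omega}}{\mathrm{d}x} = f_{\omega}(x), \qquad I_{\omega}(x_0) = 0, $$

so evaluating the definite integral amounts to integrating this ODE from $x_0$ to $x$. The examples below instantiate this idea.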

1. Error function: $\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x \exp(-t^2) dt$

As a first example, we will evaluate the error function, defined by the integral $\operatorname{erf}(x)=\frac{2}{\sqrt{\pi}}\int_0^x \exp(-t^2) dt$ using TaylorIntegration.jl. We proceed as follows:

First, define a suitable ODE system equivalent to the definite integral problem at hand, and then integrate it. Indeed, by the fundamental theorem of calculus we have

$$ \operatorname{erf} '(x) =\frac{2}{\sqrt{\pi}}\exp(-x^2) $$

where the prime $'$ denotes differentiation with respect to $x$. Therefore, we reinterpret $\operatorname{erf}(x)$ as the solution of the initial value problem defined by the ODE

$$ \frac{d\operatorname{erf}}{dx} = \frac{2}{\sqrt{\pi}}\exp(-x^2) $$

subject to the initial condition $\operatorname{erf}(0)=0$. We will integrate this ODE in order to evaluate $\operatorname{erf}(x)$.

We start by loading TaylorIntegration and Plots with the GR backend:

In [1]:
using TaylorIntegration, Plots
gr()
Out[1]:
Plots.GRBackend()

This is the RHS of the ODE we want to solve (below, t is the independent variable):

In [2]:
f(t, x) = (2/sqrt(pi))*exp(-t^2)
Out[2]:
f (generic function with 1 method)

The parameters we will use for the Taylor integration are:

In [3]:
x0 = 0.0 #the initial value of the independent variable; in this case, x
xmax = 10.0 #the final value of the independent variable; in this case, x
erf0 = 0.0 #the initial condition for erf: erf(0) = 0
const order = 25 #the order of the Taylor expansions
const abstol = 1e-20 #the absolute local error tolerance
Out[3]:
1.0e-20

The initial conditions are:

In [4]:
x0, erf0
Out[4]:
(0.0, 0.0)

Then, we proceed to integrate:

In [5]:
@time xv, erfv = taylorinteg(f, erf0, x0, xmax, order, abstol, maxsteps=5000);
  0.335947 seconds (147.59 k allocations: 7.766 MiB)

The final $x$ and $\operatorname{erf}$ values are:

In [6]:
xv[end], erfv[end]
Out[6]:
(10.0, 1.0)

Now, how does $\operatorname{erf}(x)$ look as a function of $x$?

In [7]:
plot(xv, erfv, leg=false)
scatter!(xv, erfv, ms=3.0)

xlabel!("x")
ylabel!("erf(x)")
title!("Evaluating erf(x) using TaylorIntegration")
Out[7]:
[Plot: "Evaluating erf(x) using TaylorIntegration"; erf(x) vs x, for x from 0 to 10.]

At first glance, this looks great! But how do these values actually compare to SpecialFunctions.jl's erf implementation?

In [8]:
import SpecialFunctions: erf
In [9]:
?Base.erf
Out[9]:
erf(x)

Compute the error function of x, defined by $\frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt$ for arbitrary complex x.

Let's plot the absolute difference between our integrated values for $\operatorname{erf}(x)$ and SpecialFunctions.jl's erf, in units of the machine epsilon, as a function of $x$:

In [10]:
plot(
    xv, (erfv-erf.(xv))/eps(),
    xaxis = "x",
    yaxis = "Diff: integrated vs Base.erf",
    title = "Absolute difference between erf(x) and I(x)",
    leg = false
)

scatter!(xv, (erfv-erf.(xv))/eps(), ms=3.0)
Out[10]:
[Plot: "Absolute difference between erf(x) and I(x)"; the difference, in units of eps(), vs x; the values stay between 0 and 0.5.]

So the error is at most half of eps() at each step!
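
As a quick numerical check of this claim, one could summarize the differences plotted above in a single number (a minimal sketch; the exact value depends on the particular run):

# maximum absolute difference between the integrated values and erf, in units of eps();
# according to the plot above, this should be at most about 0.5
maximum(abs.(erfv - erf.(xv)))/eps()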

2. High-accuracy evaluation of elliptic integrals with BigFloats

In this section, we will evaluate the elliptic integral $K$ using BigFloats, up to an accuracy of about $10^{-77}$ (yes, 1E-77!). It is well known that the complete elliptic integral of the first kind, $K(k)$, may be evaluated via the integral

$$ K(k) = \int_0^{\pi/2}\frac{ \mathrm{d}t }{ \sqrt{1-k^2\sin^2 t} } $$

Again, we can evaluate this integral using TaylorIntegration.jl, if we reinterpret this definite integral as the solution of the ODE defined by

$$ \frac{\mathrm{d}K_k}{\mathrm{d}t} = \frac{ 1 }{ \sqrt{1-k^2\sin^2 t} } $$

integrated from $t=0$ to $t=\pi/2$ for a given parameter $k$, subject to the initial condition $K_k(t=0)=0$.

For definiteness, we will select the parameter value $k_0=0.6051864057360395$.

In [11]:
k0 = 0.6051864057360395
k = big"0.6051864057360395"
Out[11]:
6.051864057360394999999999999999999999999999999999999999999999999999999999999984e-01

Then, $k^2$ is:

In [12]:
k^2
Out[12]:
3.662505856877062234277491455602499999999999999999999999999999999999999999999966e-01

The RHS for the corresponding ODE may be written as:

In [13]:
G(t,x) = 1/sqrt(1-(k*sin(t))^2)
Out[13]:
G (generic function with 1 method)

Since we will evaluate this integral using BigFloats, we will select an order of 90, and a local absolute tolerance near eps(BigFloat):
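
For reference, with the default 256-bit precision of BigFloat, eps(BigFloat) is of the order of 1e-77 (a quick check; the value changes if the precision has been modified):

precision(BigFloat), eps(BigFloat)   # (256, ≈1.73e-77) with the default settings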

In [14]:
const bigorder = 90
const bigabstol = big"1.0E-77"
Out[14]:
1.000000000000000000000000000000000000000000000000000000000000000000000000000004e-77

Now, we perform the integration:

In [15]:
@time tv, xv = taylorinteg(G, 0.0, 0.0, BigFloat(π)/2, bigorder, bigabstol);
  2.561874 seconds (18.57 M allocations: 924.474 MiB, 34.29% gc time)

The value of $K(k)$ corresponds to the last element of xv, i.e., xv[end]:

In [16]:
xv[end]
Out[16]:
1.754812577961136589734955441226555912336722052661065250219997312395371169887374

Using Elliptic.jl, we can obtain the first few digits of the value of $K(k)$, for the selected value of $k$:

In [17]:
import Elliptic: K
In [18]:
K(k^2)
Out[18]:
1.7548125779611365

Furthermore, using ArbFloats.jl, we can obtain an extended precision evaluation of the elliptic integral $K$ (thanks to @JeffreySarnoff for this!):

In [19]:
using ArbFloats

arb_precision = 270 # bits, select: 100..3200
setprecision(ArbFloat, arb_precision)

function ellipticK(x::ArbFloat)
    arb_one    = one(ArbFloat)
    arb_halfpi = ArbFloat(pi)/2
        
    k = sqrt( abs(x - arb_one) )
    k = agm(arb_one, k)
    k = arb_halfpi / k

    return k
end
Out[19]:
ellipticK (generic function with 1 method)
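
The function above relies on the classical arithmetic-geometric mean (AGM) relation $K(m) = \frac{\pi}{2\,\operatorname{agm}(1,\sqrt{1-m})}$, where $m=k^2$ is the parameter passed as x.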
In [20]:
showall(  ellipticK( (@ArbFloat 0.6051864057360395)^2 )  )
1.754812577961136589734955441226555912336722052661065250219997312395371169887377

This differs from the result obtained by TaylorIntegration.jl only in the last digit! Thus, TaylorIntegration.jl is able to evaluate special functions with very high accuracy.

3. High-order polynomial approximations to special functions

Now, what would happen if we wanted to evaluate the elliptic integral $K(k)$, not for one particular value, but for several values of the parameter $k$? Should we evaluate them one by one? If the values of $k$ lie within a small neighborhood of a given value $k_0$, then we can combine TaylorSeries.jl and TaylorIntegration.jl to obtain a high-order polynomial approximation to the special function $K$ around $k_0$. Indeed, we may think of $k$ as a Taylor polynomial in a dummy variable $t$, such that $k=k_0+t$, in order to obtain a polynomial representation of $K(k)$ for $k \in (k_0-\epsilon,k_0+\epsilon)$ (where $\epsilon$ is a small number).

In [21]:
using TaylorSeries
In [22]:
t = Taylor1(15) # the dummy variable in which we will expand the parameter k around k0
Out[22]:
 1.0 t + 𝒪(t¹⁶)
In [23]:
k = k0 + t # re-thinking k as a Taylor polynomial in the dummy variable t
Out[23]:
 0.6051864057360395 + 1.0 t + 𝒪(t¹⁶)

We will now integrate an ODE system where the first equation corresponds to the original problem of evaluating the elliptic integral $K$, whereas the second equation corresponds to the parameter $k$; since $k$ is constant with respect to the integration variable in the definition of $K$, we have $\dot k = 0$:

In [24]:
function G!(t, x, dx)
    dx[1] = 1/sqrt(1-(x[2]*sin(t))^2)
    dx[2] = zero(x[2])
    nothing
end
Out[24]:
G! (generic function with 1 method)

The initial conditions for this new system are:

In [25]:
x0 = [zero(t), k]
Out[25]:
2-element Array{TaylorSeries.Taylor1{Float64},1}:
                         0.0 + 𝒪(t¹⁶)
  0.6051864057360395 + 1.0 t + 𝒪(t¹⁶)

Now, we can integrate the system above:

In [26]:
@time tv, xv = taylorinteg(G!, x0, 0.0, π/2, 28, 1E-20);
  1.147146 seconds (2.48 M allocations: 186.757 MiB, 4.43% gc time)

Now, xv[end,1] corresponds to a 15th-order Taylor expansion, around $k_0$, of the complete elliptic integral of the first kind, $K$:

In [27]:
xv[end,1]
Out[27]:
 1.7548125779611359 + 0.7902217547288612 t + 1.4862006794969997 t² + 2.4723818499390178 t³ + 4.877283348455082 t⁴ + 9.96559188808282 t⁵ + 21.224636460154077 t⁶ + 46.32399841153163 t⁷ + 103.09191906811044 t⁸ + 232.82644779824827 t⁹ + 532.0565841031579 t¹⁰ + 1227.5568728333137 t¹¹ + 2854.8206184572873 t¹² + 6683.860182924387 t¹³ + 15738.519788056736 t¹⁴ + 37243.53671901891 t¹⁵ + 𝒪(t¹⁶)

We can exploit the function-like behavior (i.e., the "callability") of Taylor1 variables in order to call this Taylor expansion of $K$ as if it were a function:

In [28]:
ellipticK_taylor(x) = xv[end,1](x)
Out[28]:
ellipticK_taylor (generic function with 1 method)

Since we're expanding around $k_0$, note that ellipticK_taylor(x) ≈ Elliptic.K((x+k0)^2):

In [29]:
ellipticK_taylor(0.0)-Elliptic.K(k0^2)
Out[29]:
-6.661338147750939e-16

Now, we will plot ellipticK_taylor and Elliptic.K for $k \in [k_0-0.35,k_0+0.35]$:

In [30]:
k_radius = 0.35
x = k0-k_radius:0.025:k0+k_radius
Out[30]:
0.25518640573603957:0.025:0.9301864057360396
In [31]:
plot(x, ellipticK_taylor.(x-k0), lab="Taylor approx")
plot!(x, Elliptic.K.(x.^2), lab="Elliptic.K")
xlabel!("k")
ylabel!("Elliptic integral K value")
title!("Taylor polynomial approximation vs Elliptic.K")
Out[31]:
[Plot: "Taylor polynomial approximation vs Elliptic.K"; elliptic integral K value vs k, comparing the Taylor approximation ("Taylor approx") with Elliptic.K.]

Now, let's plot the absolute difference between ellipticK_taylor and Elliptic.K for $k$ in the same interval:

In [32]:
plot(x, log10.(abs.(ellipticK_taylor.(x-k0)-Elliptic.K.(x.^2))), lab="log10(abs(diff))")
xlabel!("k")
ylabel!("abs difference (log10)")
title!("K: Taylor approximation vs Elliptic.K")
Out[32]:
[Plot: "K: Taylor approximation vs Elliptic.K"; log10 of the absolute difference vs k.]

So we see that near $k_0$ the difference between the Taylor polynomial approximation and Elliptic.K is below $10^{-14}$, while at the endpoints of the interval it grows to about $10^{-3}$. We can obtain better approximations using higher-order Taylor polynomials, up to the point where Float64 can no longer give better accuracy and extended precision should be used instead.
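
For instance, here is a minimal sketch of how one might redo the jet-transport integration in extended precision (the expansion order, integration order and tolerance below are illustrative assumptions, not tested values):

tb = Taylor1(BigFloat, 20)          # 20th-order expansion in the dummy variable, with BigFloat coefficients
kb = big"0.6051864057360395" + tb   # k = k0 + δk in extended precision
x0b = [zero(tb), kb]
tvb, xvb = taylorinteg(G!, x0b, big(0.0), BigFloat(π)/2, 90, big"1.0E-77", maxsteps=5000)
ellipticK_taylor_big(δk) = xvb[end,1](δk)   # extended-precision polynomial approximation of K around k0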

Furthermore, evaluating the Taylor polynomial approximation we obtained above is practically as efficient as (if not faster than!) evaluating the elliptic integral $K$ with Elliptic.jl:

In [33]:
using BenchmarkTools
In [34]:
k1 = k0+0.1
k1sq = k1^2
Out[34]:
0.49728786683491416
In [35]:
@btime Elliptic.K(k1sq)
  268.143 ns (1 allocation: 16 bytes)
Out[35]:
1.8517837163252886
In [36]:
@btime ellipticK_taylor(0.1)
  114.540 ns (1 allocation: 16 bytes)
Out[36]:
1.8517837163136597

Note that this approach is also suitable for more complicated, or even custom-defined, special functions!
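
For example, here is a minimal sketch of how the same recipe could be applied to the Fresnel cosine integral $C(x)=\int_0^x\cos(\pi t^2/2)\,\mathrm{d}t$ (the upper limit and the reuse of order and abstol from the erf example are choices made purely for illustration):

fresnelC_rhs(t, x) = cos(pi*t^2/2)   # RHS of C'(x) = cos(πx²/2), with C(0) = 0
xs, Cs = taylorinteg(fresnelC_rhs, 0.0, 0.0, 5.0, order, abstol, maxsteps=5000)
Cs[end]   # ≈ C(5)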

4. High-order polynomial approximations to integral transforms

In this section, we will obtain a high-order polynomial approximation to the Fourier transform of a given function $f$:

$$ \hat f(\omega) =\frac{1}{\sqrt{2\pi}} \int_{-\infty}^\infty f(t)\operatorname{e}^{-i\omega t}\mathrm{d}t $$
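
Following the same trick as before, for each fixed $\omega$ we regard $\hat f(\omega)$ as the final value of the solution of the system

$$ \frac{\mathrm{d}x_1}{\mathrm{d}t} = \frac{1}{\sqrt{2\pi}} f(t)\operatorname{e}^{-i x_2 t}, \qquad \frac{\mathrm{d}x_2}{\mathrm{d}t} = 0, $$

with $x_2=\omega$ and $x_1=0$ at the left endpoint, integrated over an interval outside of which $f$ is negligible (or vanishes). This is exactly what the functions F1! and F2! below implement.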

We will deal with two cases:

  • $f_1(t)=\exp(-t^2)$
  • $f_2(t)=\cos(t)$ for $t \in [-4\pi,4\pi]$; $0$ otherwise.
In [37]:
#f1 and f2:
f1(t) = exp(-t^2)
f2(t) = cos(t) #for t ∈ [−4π,4π]; 0 otherwise.

#f1 and f2 Fourier transform defining ODEs:
function F1!(t, x, dx)
    dx[1] = f1(t)*exp(-im*x[2]*t)/(sqrt(2π))
    dx[2] = zero(x[2])
    nothing
end

function F2!(t, x, dx)
    dx[1] = f2(t)*exp(-im*x[2]*t)/(sqrt(2π))
    dx[2] = zero(x[2])
    nothing
end

F1(ω) = exp(-(ω^2)/4)/sqrt(2) #analytical expression of Fourier transform of f1
F2(ω) = sqrt(2/π)*ω*sin(4π*ω)/(ω^2-1) #analytical expression of Fourier transform of f2
Out[37]:
F2 (generic function with 1 method)
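
The closed-form expression for F2 follows from a direct calculation (the imaginary part vanishes because $f_2$ is even):

$$ \frac{1}{\sqrt{2\pi}}\int_{-4\pi}^{4\pi}\cos(t)\operatorname{e}^{-i\omega t}\,\mathrm{d}t = \frac{1}{\sqrt{2\pi}}\int_{-4\pi}^{4\pi}\cos(t)\cos(\omega t)\,\mathrm{d}t = \sqrt{\frac{2}{\pi}}\,\frac{\omega\sin(4\pi\omega)}{\omega^2-1}. $$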

$\omega_0$ is the $\omega$ value around which we will expand. That is, we will have $\omega = \omega_0 + \delta \omega$:

In [38]:
ω0 = 0.0+0im
Out[38]:
0.0 + 0.0im
In [39]:
δω = Taylor1(Complex{Float64}, 30)
Out[39]:
  ( 1.0 ) t + 𝒪(t³¹)
In [40]:
ω = ω0 + δω
Out[40]:
  ( 1.0 ) t + 𝒪(t³¹)

The initial condition for the ODE is:

In [41]:
x0 = [zero(δω), ω]
Out[41]:
2-element Array{TaylorSeries.Taylor1{Complex{Float64}},1}:
  0.0 + 0.0im + 𝒪(t³¹)
    ( 1.0 ) t + 𝒪(t³¹)

Now, we perform the Taylor integration. For $f_1$ we truncate the integration interval to $[-10,10]$, since $\exp(-t^2)$ is negligible beyond that; for $f_2$ the integrand vanishes outside $[-4\pi,4\pi]$:

In [42]:
@time tv1, xv1 = taylorinteg(F1!, x0, -10.0, 10.0, 28, 1E-20);
@time tv2, xv2 = taylorinteg(F2!, x0, -4pi, 4pi, 28, 1E-20);
  1.780823 seconds (4.23 M allocations: 746.953 MiB, 7.92% gc time)
  0.843982 seconds (3.55 M allocations: 692.804 MiB, 13.15% gc time)

We extract the real and imaginary parts of the Fourier transforms; these are again Taylor1 polynomials in $\delta\omega$, and hence callable:

In [43]:
r1 = real(xv1[end,1])
i1 = imag(xv1[end,1])

r2 = real(xv2[end,1])
i2 = imag(xv2[end,1]);

y1 and y2 are the ranges of $\omega$ values over which we will plot:

In [44]:
y1 = linspace(-4,4,100)
y2 = linspace(-0.99,0.99,100)
Out[44]:
-0.99:0.02:0.99

Plot the original functions together with the real and imaginary parts of their Fourier transforms:

In [45]:
# original f and Fourier(f), 1
plot(y1, f1.(y1), lab="f(t) = exp(-t²)")
plot!(y1, r1.(y1), lab="real(F₁(ω))")
plot!(y1, i1.(y1), lab="imag(F₁(ω))")
Out[45]:
[Plot: f(t) = exp(-t²) together with the real and imaginary parts of F₁(ω).]
In [46]:
# original f and Fourier(f), 2
plot(y2, f2.(y2), lab="f₂(t) = cos(t)")
plot!(y2, r2.(y2), lab="real(F₂(ω))")
plot!(y2, i2.(y2), lab="imag(F₂(ω))")
Out[46]:
[Plot: f₂(t) = cos(t) together with the real and imaginary parts of F₂(ω).]

Plot the analytical formulae of the corresponding Fourier transforms vs the integrated expressions obtained above:

In [47]:
#integrated vs expected, 1
plot(y1, r1.(y1), lab="integrated")
plot!(y1, F1.(y1), lab="analytical")
title!("f1 Fourier transform: analytical vs integrated")
xlabel!("omega")
ylabel!("real part of Fourier transf.")
Out[47]:
[Plot: "f1 Fourier transform: analytical vs integrated"; real part of the Fourier transform vs omega.]
In [48]:
#integrated vs expected, 2
plot(y2, r2.(y2), lab="integrated")
plot!(y2, F2.(y2), lab="analytical")
title!("f2 Fourier transform: analytical vs integrated")
xlabel!("omega")
ylabel!("real part of Fourier transf.")
Out[48]:
[Plot: "f2 Fourier transform: analytical vs integrated"; real part of the Fourier transform vs omega.]