Here, we will integrate the initial-value problem (IVP) defined by

$$ \begin{align} \dot x &= x^2 \\ x(0)&=x_0 \end{align} $$

Given a real number $A>0$, the restriction of the function $f(x)=x^2$ to the interval $I_A=(-A,A)$ satisfies the Lipschitz condition $|f(x_1)-f(x_2)|=|x_1^2-x_2^2|=|x_1+x_2|\cdot|x_1-x_2| \leq 2A|x_1-x_2|$ for every $x_1,x_2 \in I_A$. Therefore, the Picard-Lindelöf theorem for ordinary differential equations guarantees there exists a $\delta>0$ such that a solution of this IVP exists and is unique for $t \in [0,\delta]$, for any $x_0 \in I_A$. Note that $f$ is not globally Lipschitz over $\mathbb{R}$, so restricting it to a bounded interval is necessary to obtain a Lipschitz constant, which in turn is required to fulfill the hypotheses of the Picard-Lindelöf theorem.
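As an illustration (a sketch added here, not part of the original argument), the Lipschitz bound can be spot-checked numerically on a grid of sample points inside $I_A$:

```julia
# Spot-check the Lipschitz bound |f(x1)-f(x2)| ≤ 2A|x1-x2| on I_A = (-A, A)
# for f(x) = x^2; a numerical check over a grid, not a proof
f(x) = x^2
A = 10.0
xs = -A+0.01:0.37:A-0.01    # sample points strictly inside I_A
all(abs(f(x1) - f(x2)) <= 2A*abs(x1 - x2) for x1 in xs, x2 in xs)   # → true
```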

Now, for any initial condition $x_0\in\mathbb{R}$ and any initial time $t_0\in\mathbb{R}$, the analytical solution of this problem is:

$$ x(t)=\frac{x_0}{1-x_0\cdot (t-t_0)} $$

In particular, the analytical solution diverges at $t^*=t_0+1/x_0$; i.e., the solution is guaranteed to exist only for $t \in [t_0,t_0+1/x_0)$ if $x_0>0$. If $x_0<0$, the analytical solution "moves away" from the singularity, since in that case $t^*=t_0+1/x_0<t_0$ (i.e., the singularity lies in the "past"). How does a numerical integrator behave near this divergence when $x_0>0$?
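For completeness, the closed form above follows from separating variables and integrating:

$$ \int_{x_0}^{x}\frac{\mathrm{d}u}{u^2}=\int_{t_0}^{t}\mathrm{d}s \quad\Longrightarrow\quad \frac{1}{x_0}-\frac{1}{x}=t-t_0 \quad\Longrightarrow\quad x(t)=\frac{x_0}{1-x_0\cdot (t-t_0)} $$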

We will try three methods to integrate this problem:

- Adaptive time-step, 4th-order Runge-Kutta (`ODE.jl`)
- Taylor method (`TaylorIntegration.jl`)
- Adaptive time-step Runge-Kutta-Fehlberg 7/8 method (`ODE.jl`)

As initial conditions to integrate this IVP, we choose $x_0=3$, $t_0=0$, since then the singularity will be at $t^*=1/3$, and this number is not exactly representable in a binary floating-point format. Thus, any constant time-step numerical integrator should break down when integrating up to $t_\mathrm{max}=1/3$, or beyond.
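That $1/3$ is not exactly representable in `Float64` can be checked directly (a sketch added here, not an original notebook cell):

```julia
x0 = 3.0
t_star = 1/x0        # the singularity time, rounded to the nearest Float64
t_star == 1//3       # exact comparison against the rational 1/3 → false
```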

We start off by including the relevant packages:

In [1]:

```
using TaylorIntegration, ODE, Plots, LaTeXStrings
```

The ODE:

In [2]:

```
diffeq(t, x) = x.^2 #right-hand side of the ODE, in the (t, x) signature expected by ODE.jl
```

Out[2]:

We select $x_0=3$, $t_0=0$. Then, the singularity is at $t=1/3$.

In [3]:

```
@time tRK, xRK = ode45(diffeq, 3.0, [0.0, 0.34]); #warmup lap
@time tRK, xRK = ode45(diffeq, 3.0, [0.0, 0.34]);
```

Plot $x$ vs $t$ (log-log):

In [4]:

```
pyplot()
plot(log10(tRK[2:end]), log10(xRK[2:end]))
title!("x vs t (log-log)")
xlabel!(L"\log_{10}(t)")
ylabel!(L"\log_{10}(x(t))")
```

Out[4]:

What is the final state of the system?

In [5]:

```
tRK[end], xRK[end]
```

Out[5]:

Does the integrator get past the singularity?

In [6]:

```
tRK[end]>1/3
```

Out[6]:

The answer is yes! So the last value of the solution is meaningless:

In [7]:

```
xRK[end] #this value is meaningless
```

Out[7]:

How many steps did the RK integrator perform?

In [8]:

```
length(xRK)-1
```

Out[8]:

How does the numerical solution compare to the analytical solution? The analytical solution is:

In [9]:

```
exactsol(t, x0) = x0./(1.0-x0.*t) #analytical solution
```

Out[9]:
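As a quick sanity check (a cell sketched here, not from the original notebook), we can verify that `exactsol` indeed satisfies $\dot x = x^2$, using a centered finite difference at a point well before the singularity:

```julia
exactsol(t, x0) = x0./(1.0-x0.*t)   # analytical solution, as defined above
t, h = 0.1, 1e-6
# centered finite-difference approximation of dx/dt at t
deriv = (exactsol(t+h, 3.0) - exactsol(t-h, 3.0))/(2h)
abs(deriv - exactsol(t, 3.0)^2) < 1e-6    # → true
```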

The relative difference between the numerical and analytical solution, $\delta x$, is:

In [10]:

```
δxRK = (xRK-exactsol(tRK, 3.0))./exactsol(tRK, 3.0); #numerical error, relative to analytical solution
```

The $\delta x$ vs $t$ plot (semilog):

In [11]:

```
plot(tRK[2:end], log10(abs(δxRK[2:end])))
title!("Relative error (semi-log)")
xlabel!(L"t")
ylabel!(L"\log_{10}(\delta x(t))")
```

Out[11]:

This plot shows that the error of the numerical solution grows systematically; at the end of the integration, the error in the numerical solution is

In [12]:

```
(xRK[end-1]-exactsol(tRK[end-1], 3.0))
```

Out[12]:

Again, we select $x_0=3$, $t_0=0$. The order of the Taylor integration is $28$, and we set the absolute tolerance to $10^{-20}$; this tolerance is used at each time-step to compute the adaptive step size. We set the maximum number of integration steps equal to the number of steps performed by the previous integrator.
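To sketch how such a tolerance can drive an adaptive time step (an assumption about the standard Taylor-method heuristic, not necessarily `TaylorIntegration.jl`'s exact internals), the step is commonly chosen so that the last Taylor coefficients contribute less than the tolerance:

```julia
# Sketch of a common Taylor-method step-size heuristic (an assumption;
# not necessarily TaylorIntegration.jl's exact internals). Given the
# Taylor coefficients x_0, …, x_n of the solution at the current time
# and an absolute tolerance ϵ, choose h so the last terms stay below ϵ:
#   h ≈ min((ϵ/|x_{n-1}|)^(1/(n-1)), (ϵ/|x_n|)^(1/n))
function taylor_stepsize(coeffs, ϵ)
    n = length(coeffs) - 1                    # order of the expansion
    h1 = (ϵ/abs(coeffs[end-1]))^(1/(n-1))
    h2 = (ϵ/abs(coeffs[end]))^(1/n)
    min(h1, h2)
end
```

For this problem the Taylor coefficients at $t_0$ are $x_k = x_0^{k+1}$, so as the solution grows the accepted step shrinks; this is why a Taylor integrator stalls before the singularity instead of jumping over it.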

In [13]:

```
@time tT, xT = taylorinteg(diffeq, 3.0, 0.0, 0.34, 28, 1e-20, maxsteps=length(xRK)-1); #warmup lap
@time tT, xT = taylorinteg(diffeq, 3.0, 0.0, 0.34, 28, 1e-20, maxsteps=length(xRK)-1);
```

In [14]:

```
tT[end], xT[end]
```

Out[14]:

How many steps did the Taylor integrator perform?

In [15]:

```
length(xT)-1
```

Out[15]:

Below, we show the $x$ vs $t$ plot (log-log):

In [16]:

```
plot(log10(tT[2:end]), log10(xT[2:end]))
title!("x vs t (log-log)")
xlabel!(L"\log_{10}(t)")
ylabel!(L"\log_{10}(x(t))")
```

Out[16]:

Does the integrator get past the singularity?

In [17]:

```
tT[end] > 1/3
```

Out[17]:

The answer is no! Even if we increase the value of the `maxsteps` keyword in `taylorinteg`, it doesn't get past the singularity!

Now, the relative difference between the numerical and analytical solution, $\delta x$, is:

In [18]:

```
δxT = (xT.-exactsol(tT, 3.0))./exactsol(tT, 3.0);
```

The $\delta x$ vs $t$ plot (semilog):

In [19]:

```
plot(tT[6:end], log10(abs(δxT[6:end])))
title!("Relative error (semi-log)")
xlabel!(L"t")
ylabel!(L"\log_{10}(\delta x(t))")
```

Out[19]:

We observe that, while the execution time is roughly 10 times longer than that of the 4th-order RK integrator, the numerical solution obtained by the Taylor integrator stays within $10^{-12}$ of the analytical solution, for the same number of steps.

Now, what happens if we use a higher-order Runge-Kutta method to integrate this problem?

Here we use the Runge-Kutta-Fehlberg 7/8 method, included in `ODE.jl`, to integrate the same problem as before.

In [20]:

```
@time t78, x78 = ode78(diffeq, 3.0, [0.0, 0.34]); #warmup lap
@time t78, x78 = ode78(diffeq, 3.0, [0.0, 0.34]);
```

Plot $x$ vs $t$ (log-log):

In [21]:

```
plot(log10(t78[2:end]), log10(x78[2:end]))
title!("x vs t (log-log)")
xlabel!(L"\log_{10}(t)")
ylabel!(L"\log_{10}(x(t))")
```

Out[21]:

What is the final state of the system?

In [22]:

```
t78[end], x78[end]
```

Out[22]:

Does the integrator get past the singularity?

In [23]:

```
t78[end]>1/3
```

Out[23]:

The answer is yes! So the last value of the solution is meaningless:

In [24]:

```
x78[end] #this value is meaningless
```

Out[24]:

How many steps did the RK integrator perform?

In [25]:

```
length(x78)-1
```

Out[25]:

The relative difference between the numerical and analytical solution, $\delta x$, is:

In [26]:

```
δx78 = (x78-exactsol(t78, 3.0))./exactsol(t78, 3.0); #error relative to analytical solution
```

The $\delta x$ vs $t$ plot (semilog):

In [27]:

```
plot(t78[2:end], log10(abs(δx78[2:end])))
title!("Relative error (semi-log)")
xlabel!(L"t")
ylabel!(L"\log_{10}(\delta x(t))")
```

Out[27]:

This time, the RKF 7/8 integrator is "only" twice as fast as the Taylor integrator, but its error is still several orders of magnitude larger than the latter's.

As a last example, we will once again integrate our problem using the 4th-order adaptive RK integrator, but imposing a more stringent tolerance:

In [28]:

```
@time tRK_, xRK_ = ode45(diffeq, 3.0, [0.0, 0.34], abstol=1e-8, reltol=1e-8); #warmup lap
@time tRK_, xRK_ = ode45(diffeq, 3.0, [0.0, 0.34], abstol=1e-8, reltol=1e-8);
```

Now, the integrator takes 10 times longer to complete the integration than the Taylor method.

Does it get past the singularity?

In [29]:

```
tRK_[end] > 1/3
```

Out[29]:

Yes! So, once again, the last value reported by the integrator is completely meaningless. But, has it attained a higher precision than the Taylor method? Well, let's calculate once again the numerical error relative to the analytical solution:

In [30]:

```
δxRK_ = (xRK_-exactsol(tRK_, 3.0))./exactsol(tRK_, 3.0);
```

And now, let's plot this relative error vs time:

In [31]:

```
plot(tRK_[2:end], log10(abs(δxRK_[2:end])))
title!("Relative error (semi-log)")
xlabel!(L"t")
ylabel!(L"\log_{10}(\delta x(t))")
ylims!(-20,20)
```

Out[31]:

The numerical error has actually gotten worse! `TaylorIntegration.jl` is indeed a very competitive package for integrating ODEs.
