The `taylor_adaptive` class provides an easy-to-use interface to heyoka.py's
main capabilities. Objects of this class can be constructed from a system
of ODEs and a set of initial conditions (plus a number of optional configuration
parameters with, hopefully, sensible defaults). Methods are provided to
propagate the state of the system in time, either step by step or up to
specified time limits.

Let's see how we can use `taylor_adaptive` to integrate the ODE
system of the simple pendulum,

$$
\begin{cases}
x^\prime = v \\
v^\prime = -9.8\sin x
\end{cases}
$$

with initial conditions

$$
\begin{cases}
x\left( 0 \right) = 0.05 \\
v\left( 0 \right) = 0.025
\end{cases}
$$

```python
import heyoka as hy

# Create the symbolic variables x and v.
x, v = hy.make_vars("x", "v")

# Create the integrator object.
ta = hy.taylor_adaptive(
    # Definition of the ODE system:
    # x' = v
    # v' = -9.8 * sin(x)
    sys = [(x, v),
           (v, -9.8 * hy.sin(x))],
    # Initial conditions for x and v.
    state = [0.05, 0.025])

ta
```

```
Tolerance     : 2.220446049250313e-16
High accuracy : false
Compact mode  : false
Taylor order  : 20
Dimension     : 2
Time          : 0
State         : [0.05, 0.025]
```
After creating the symbolic variables `x` and `v`, we construct an instance of
`taylor_adaptive` called `ta`. By default, `taylor_adaptive` operates using
double-precision arithmetic. As (mandatory) construction arguments, we pass in
the system of differential equations and a set of initial conditions for `x`
and `v` respectively.

By default, the error tolerance of an adaptive integrator is set to the
machine epsilon, which, for double precision, is $\sim 2.2\times10^{-16}$.
From this value, heyoka.py deduces an optimal Taylor order of 20, as indicated
by the screen output. `taylor_adaptive` manages its own copy of the state
vector and the time variable. Both the state vector and the time variable are
updated automatically by the timestepping methods. Note also how, by default,
the time variable is initially set to zero.
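The tolerance value shown in the screen output is simply the double-precision machine epsilon, which can be verified directly from Python:

```python
import sys

# The machine epsilon for IEEE double-precision floats (2**-52).
eps = sys.float_info.epsilon
print(eps)  # 2.220446049250313e-16
```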
Let's now try to perform a single integration timestep:

```python
# Perform a single step.
oc, h = ta.step()

# Print the outcome flag and the timestep used.
print("Outcome : {}".format(oc))
print("Timestep: {}".format(h))
```

```
Outcome : taylor_outcome.success
Timestep: 0.21605277478009474
```
First, we invoke the `step()` method, which returns a pair of values.
The first value is a status flag indicating the outcome of the integration timestep,
while the second value is the step size that was selected by heyoka.py in order
to respect the desired error tolerance.

Let's also print the integrator object to screen again in order to inspect how state and time have changed:

```python
ta
```

```
Tolerance     : 2.220446049250313e-16
High accuracy : false
Compact mode  : false
Taylor order  : 20
Dimension     : 2
Time          : 0.21605277478009474
State         : [0.04399644836992638, -0.07844245547068798]
```
It is also possible to perform a single timestep backward in time
via the `step_backward()` method:

```python
# Perform a step backward.
oc, h = ta.step_backward()

# Print the outcome flag and the timestep used.
print("Outcome : {}".format(oc))
print("Timestep: {}".format(h))
```

```
Outcome : taylor_outcome.success
Timestep: -0.21312300047513288
```
The `step()` method can also be called with an argument representing
the maximum step size `max_delta_t`: if the adaptive timestep
selected by heyoka.py is larger (in absolute value) than `max_delta_t`,
then the timestep will be clamped to `max_delta_t`:

```python
# Perform a step forward in time, clamping
# the timestep size to 0.01.
oc, h = ta.step(max_delta_t = 0.01)

# Print the outcome flag and the timestep used.
print("Outcome : {}".format(oc))
print("Timestep: {}\n".format(h))

# Perform a step backward in time, clamping
# the timestep size to 0.02.
oc, h = ta.step(max_delta_t = -0.02)

# Print the outcome flag and the timestep used.
print("Outcome : {}".format(oc))
print("Timestep: {}".format(h))
```

```
Outcome : taylor_outcome.time_limit
Timestep: 0.01

Outcome : taylor_outcome.time_limit
Timestep: -0.02
```
Note that the integration outcome is now `time_limit`, instead of `success`.
Before moving on, we need to point out an important caveat when using the single step functions:

```{warning}
If the exact solution of the ODE system can be expressed as a polynomial function
of time, the automatic timestep deduction algorithm may return a timestep of infinity.
This is the case, for instance, when integrating the rectilinear motion of a free
particle or the constant-gravity free-fall equation. In such cases, the step functions
should be called with a finite ``max_delta_t`` argument, in order to clamp the timestep
to a finite value.

Note that the ``propagate_*()`` functions (described {ref}`below<time_limited_prop>`)
are not affected by this issue.
```
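To see why this caveat arises, consider the constant-gravity free-fall equation mentioned in the warning: its exact solution is the quadratic polynomial

$$
x\left( t \right) = x_0 + v_0 t - \frac{1}{2} g t^2,
$$

so that all Taylor coefficients of order greater than 2 vanish identically. The truncation error of the Taylor expansion is then exactly zero for *any* step size, and the error-based timestep deduction has no finite step to select.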
It is possible to read from and write to both the time variable and the state vector:

```python
# Print the current time.
print("Current time : {}".format(ta.time))

# Print out the current state vector.
print("Current state vector: {}\n".format(ta.state))

# Reset the time and state to the initial values.
ta.time = 0.
ta.state[:] = [0.05, 0.025]

# Print them again.
print("Current time : {}".format(ta.time))
print("Current state vector: {}".format(ta.state))
```

```
Current time : -0.007070225695038143
Current state vector: [0.04981102 0.02845657]

Current time : 0.0
Current state vector: [0.05 0.025]
```
Note that the time is stored as a scalar value, while the state is stored as a NumPy array. For technical reasons related to the management of array lifetimes when interacting with the underlying heyoka C++ library, it is not possible to set the state directly via the syntax

```python
>>> ta.state = [0.05, 0.025] # Won't work!
```

Thus, the array assignment syntax `ta.state[:] = ...` must be used instead. Similarly, it is also possible to set the values of individual components of the state array directly:

```python
ta.state[0] = 0.05
ta.state[1] = 0.025
```
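The `ta.state[:] = ...` idiom is ordinary NumPy in-place assignment: it overwrites the contents of the existing array rather than rebinding the attribute to a new object. A minimal sketch with a plain NumPy array:

```python
import numpy as np

state = np.array([0.04981102, 0.02845657])
view = state  # a second reference to the same underlying buffer

# Slice assignment writes into the existing buffer...
state[:] = [0.05, 0.025]

# ...so every reference to the array sees the new values.
print(view)  # [0.05  0.025]
```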
(time_limited_prop)=

In addition to the step-by-step integration methods,
`taylor_adaptive` also provides methods to propagate
the state of the system for a specified amount of time.
These methods are called `propagate_for()` and
`propagate_until()`: the former integrates
the system for a specified amount of time, the latter
propagates the state up to a specified epoch.
```python
# Propagate for 5 time units.
status, min_h, max_h, nsteps, _ = ta.propagate_for(delta_t = 5.)

print("Outcome : {}".format(status))
print("Min. timestep: {}".format(min_h))
print("Max. timestep: {}".format(max_h))
print("Num. of steps: {}".format(nsteps))
print("Current time : {}\n".format(ta.time))

# Propagate until t = 20.
status, min_h, max_h, nsteps, _ = ta.propagate_until(t = 20.)

print("Outcome : {}".format(status))
print("Min. timestep: {}".format(min_h))
print("Max. timestep: {}".format(max_h))
print("Num. of steps: {}".format(nsteps))
print("Current time : {}".format(ta.time))
```

```
Outcome      : taylor_outcome.time_limit
Min. timestep: 0.20213323505293765
Max. timestep: 0.21813566576411725
Num. of steps: 24
Current time : 5.0

Outcome      : taylor_outcome.time_limit
Min. timestep: 0.20212172864807665
Max. timestep: 0.2181392923080563
Num. of steps: 72
Current time : 20.0
```
The time-limited propagation methods return a tuple of 5 values, which represent, respectively:

- the outcome of the integration (which should always be `time_limit`, unless error conditions arise),
- the minimum timestep size used during the propagation,
- the maximum timestep size used during the propagation,
- the total number of timesteps that were taken,
- the continuous output object (`None` unless continuous output was requested).

The time-limited propagation methods can be used to propagate both forward and backward in time:
```python
# Propagate back to t = 0.
status, min_h, max_h, nsteps, _ = ta.propagate_until(t = 0.)

print("Outcome : {}".format(status))
print("Min. timestep: {}".format(min_h))
print("Max. timestep: {}".format(max_h))
print("Num. of steps: {}".format(nsteps))
print("Current time : {}\n".format(ta.time))

print(ta)
```

```
Outcome      : taylor_outcome.time_limit
Min. timestep: 0.20207792808238695
Max. timestep: 0.21818982934810394
Num. of steps: 97
Current time : 0.0

Tolerance     : 2.220446049250313e-16
High accuracy : false
Compact mode  : false
Taylor order  : 20
Dimension     : 2
Time          : 0
State         : [0.050000000000000044, 0.02499999999999999]
```
Note also that the time-limited propagation methods will stop
integrating if a non-finite value is detected in the state vector
at the end of a timestep. In such a case, the outcome of the
integration will be `err_nf_state`.
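The kind of non-finite check described above can be sketched with NumPy (this is an illustration of the concept, not heyoka's internal implementation):

```python
import numpy as np

# A state vector that blew up during a step.
bad_state = np.array([0.05, np.inf])

# An integrator in this situation would stop and
# report an err_nf_state outcome.
print(np.isfinite(bad_state).all())  # False
```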
The `propagate_for()` and `propagate_until()` methods
can be invoked with additional optional keyword arguments:

- `max_delta_t`: similarly to the `step()` function, this value represents the maximum timestep size in absolute value;
- `callback`: a callable which will be invoked at the end of each timestep, with the integrator object as the only argument. If the callback returns `True`, the integration will continue after the invocation of the callback, otherwise it will be interrupted;
- `c_output`: a boolean flag that enables [continuous output](<./Dense output.ipynb>).

```python
# Propagate to t = .5 using a max_delta_t and
# providing a callback that prints the current time.

# The callback.
def cb(ta):
    print("Current time: {}".format(ta.time))
    return True

ta.propagate_until(t = .5, max_delta_t = .1, callback = cb);
```

```
Current time: 0.1
Current time: 0.2
Current time: 0.30000000000000004
Current time: 0.4
Current time: 0.5
```
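The stop-on-`False` behaviour of the callback can be sketched in plain Python (`propagate_sketch` is a hypothetical stand-in for illustration, not part of heyoka.py's API):

```python
# Hypothetical stepping loop illustrating the callback contract:
# the loop keeps stepping while the callback returns True.
def propagate_sketch(max_steps, callback):
    nsteps = 0
    for _ in range(max_steps):
        nsteps += 1
        if not callback(nsteps):
            # The callback requested an interruption.
            break
    return nsteps

# The callback interrupts the integration after 3 steps.
print(propagate_sketch(10, lambda n: n < 3))  # 3
```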
Optionally, callbacks can implement a `pre_hook()` method that will be invoked
once before the first step is taken by the `propagate_for()`
and `propagate_until()` methods:

```python
# Callback with pre_hook().
class cb:
    def __call__(self, ta):
        print("Current time: {}".format(ta.time))
        return True

    def pre_hook(self, ta):
        print("pre_hook() invoked!")

ta.propagate_until(t = 1.5, max_delta_t = .1, callback = cb());
```

```
pre_hook() invoked!
Current time: 0.6
Current time: 0.7
Current time: 0.8
Current time: 0.9
Current time: 1.0
Current time: 1.1
Current time: 1.2
Current time: 1.3
Current time: 1.4000000000000001
Current time: 1.5
```
Another way of propagating the state of a system in a `taylor_adaptive`
integrator is over a time grid. In this mode, the integrator
uses [dense output](<./Dense output.ipynb>) to compute the state of the system
over a grid of time coordinates provided by the user. If the grid is denser
than the typical timestep size, this can be noticeably more efficient than
repeatedly calling `propagate_until()` on the grid points, because
propagating the system state via dense output is much faster than taking
a full integration step.

Let's see a simple usage example:

```python
# Reset the time and state to the initial values.
ta.time = 0.
ta.state[:] = [0.05, 0.025]

# Propagate over a time grid from 0 to 1
# at regular intervals.
out = ta.propagate_grid(grid = [.1, .2, .3, .4, .5, .6, .7, .8, .9, 1.])

out
```

```
(<taylor_outcome.time_limit: -4294967299>,
 0.20291845444801257,
 0.216140019757225,
 5,
 array([[ 0.05003035, -0.024398  ],
        [ 0.04519961, -0.07142727],
        [ 0.03597685, -0.11152037],
        [ 0.02325783, -0.14078016],
        [ 0.00827833, -0.15635952],
        [-0.00750582, -0.15674117],
        [-0.02256041, -0.14188793],
        [-0.03542229, -0.11324639],
        [-0.04484178, -0.07360369],
        [-0.04990399, -0.02681336]]))
```
`propagate_grid()` takes in input a grid of time points
and returns a tuple of 5 values. The first 4 values are the same
as in the other `propagate_*()` functions: the outcome of the
integration, the minimum and maximum timestep sizes, and the total
number of timesteps that were taken.

The fifth value returned by `propagate_grid()` is a 2D array containing
the state of the system at the time points in the grid:

```python
# Print the state at t = 0.4 (index 3 in the time grid).
print("State at t = 0.4: {}".format(out[4][3]))
```

```
State at t = 0.4: [ 0.02325783 -0.14078016]
```
As you can see from the screen output above (where we printed the `out`
variable), the use of `propagate_grid()` resulted in 5 integration
timesteps being taken. Had we used `propagate_until()` instead, we would
have needed 10 integration timesteps to obtain the same result.

There are no special requirements on the time values in the grid (apart from the fact that they must be finite and ordered monotonically).
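For instance, a regular grid like the one used above can be generated with NumPy, and an irregular (but monotonically ordered) grid is equally valid:

```python
import numpy as np

# Regular grid equivalent to [.1, .2, ..., 1.].
grid = np.linspace(0.1, 1.0, 10)

# An irregular grid is fine too, as long as its values
# are finite and monotonically ordered.
irregular = np.array([0.1, 0.15, 0.4, 0.41, 1.0])
print(np.all(np.diff(irregular) > 0))  # True
```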
The `propagate_grid()` method
can be invoked with additional optional keyword arguments:

- `max_delta_t`: similarly to the `step()` function, this value represents the maximum timestep size in absolute value;
- `callback`: a callable which will be invoked at the end of each timestep, with the integrator object as the only argument. If the callback returns `True`, the integration will continue after the invocation of the callback, otherwise it will be interrupted;
- `c_output`: a boolean flag that enables [continuous output](<./Dense output.ipynb>).

```python
# Propagate over a grid using a max_delta_t and
# providing a callback that prints the current time.

# The callback.
def cb(ta):
    print("Current time: {}".format(ta.time))
    return True

ta.propagate_grid(grid = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.],
                  max_delta_t = .1, callback = cb);
```

```
Current time: 1.2000000000000002
Current time: 1.3
Current time: 1.4000000000000001
Current time: 1.5
Current time: 1.6
Current time: 1.7000000000000002
Current time: 1.8
Current time: 1.9000000000000001
Current time: 2.0
```
Optionally, callbacks can implement a `pre_hook()` method that will be invoked
once before the first step is taken by the `propagate_grid()` method:

```python
# Callback with pre_hook().
class cb:
    def __call__(self, ta):
        print("Current time: {}".format(ta.time))
        return True

    def pre_hook(self, ta):
        print("pre_hook() invoked!")

ta.propagate_grid(grid = [1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.],
                  max_delta_t = .1, callback = cb());
```

```
pre_hook() invoked!
Current time: 1.2000000000000002
Current time: 1.3
Current time: 1.4000000000000001
Current time: 1.5
Current time: 1.6
Current time: 1.7000000000000002
Current time: 1.8
Current time: 1.9000000000000001
Current time: 2.0
```