In this notebook, we demonstrate parameter estimation for a single particle model using a variety of PyBOP optimisers. PyBOP offers both gradient-based and non-gradient-based optimisers; a table of the currently supported methods is available in the README. In this example, we set up the model, problem, and cost function, and investigate how the different optimisers perform on this task.
Before we begin, we need to ensure that we have all the necessary tools. We will install PyBOP and upgrade dependencies:
%pip install --upgrade pip ipywidgets -q
%pip install pybop -q
With the environment set up, we can now import PyBOP alongside other libraries we will need:
import numpy as np
import pybop
pybop.plot.PlotlyManager().pio.renderers.default = "notebook_connected"
Let's fix the random seed in order to generate consistent output during development, although this does not need to be done in practice.
np.random.seed(8)
To demonstrate the parameter estimation, we first need some data. We will generate synthetic data using a PyBOP DFN forward model, which requires defining a parameter set and the model itself.
We start by creating an example parameter set, then construct the DFN model used to generate the synthetic data and the SPMe model we will fit.
parameter_set = pybop.ParameterSet.pybamm("Chen2020")
synth_model = pybop.lithium_ion.DFN(parameter_set=parameter_set)
model = pybop.lithium_ion.SPMe(parameter_set=parameter_set)
We can then simulate the model using the `predict` method, with a default constant-current discharge, to generate the voltage data.
t_eval = np.arange(0, 2000, 10)
initial_state = {"Initial SoC": 1.0}
values = synth_model.predict(t_eval=t_eval, initial_state=initial_state)
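As a quick check of the synthetic solution before adding noise, we can inspect the number of points and the voltage range returned by `predict` (plain NumPy on the solution data):

# Summarise the synthetic voltage trace
voltage = values["Voltage [V]"].data
print(f"{len(voltage)} data points, voltage from {voltage.max():.3f} V to {voltage.min():.3f} V")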
To make the parameter estimation more realistic, we add Gaussian noise to the data.
sigma = 0.002
corrupt_values = values["Voltage [V]"].data + np.random.normal(0, sigma, len(t_eval))
We will now set up the parameter estimation process by defining the datasets for optimisation and selecting the model parameters we wish to estimate.
The dataset for optimisation is composed of time, current, and the noisy voltage data:
dataset = pybop.Dataset(
{
"Time [s]": t_eval,
"Current function [A]": values["Current [A]"].data,
"Voltage [V]": corrupt_values,
}
)
We select the parameters for estimation and set up their prior distributions and bounds. In this example, we select the active material volume fraction of each electrode, both of which are non-geometric parameters.
parameters = pybop.Parameters(
pybop.Parameter(
"Negative electrode active material volume fraction",
prior=pybop.Gaussian(0.6, 0.02),
bounds=[0.5, 0.8],
),
pybop.Parameter(
"Positive electrode active material volume fraction",
prior=pybop.Gaussian(0.48, 0.02),
bounds=[0.4, 0.7],
),
)
Now, we can select the optimisers to investigate. The first list contains the gradient-based PINTS optimisers (AdamW, GradientDescent, IRPropMin). The second list contains the non-gradient-based PINTS optimisers. The final list contains the SciPy optimisers, which include both gradient-based and non-gradient-based algorithms.
gradient_optimisers = [
pybop.AdamW,
pybop.GradientDescent,
pybop.IRPropMin,
]
non_gradient_optimisers = [
pybop.CMAES,
pybop.SNES,
pybop.PSO,
pybop.XNES,
pybop.NelderMead,
pybop.CuckooSearch,
]
scipy_optimisers = [
pybop.SciPyMinimize,
pybop.SciPyDifferentialEvolution,
]
With the datasets, parameters, and optimisers defined, we can set up the optimisation problem and cost function. In this example we loop through all of the above optimisers and store the results for later visualisation and analysis.
optims = []
xs = []
model.set_initial_state(initial_state)
problem = pybop.FittingProblem(model, parameters, dataset)
cost = pybop.SumSquaredError(problem)
for optimiser in gradient_optimisers:
    print(f"Running {optimiser.__name__}")
    # Use a smaller initial step size (sigma0) for GradientDescent;
    # the other optimisers keep their default (None)
    sigma0 = 0.01 if optimiser is pybop.GradientDescent else None
    optim = optimiser(
        cost, sigma0=sigma0, max_unchanged_iterations=20, max_iterations=60
    )
    results = optim.run()
    optims.append(optim)
    xs.append(results.x)
Running AdamW
NOTE: Boundaries ignored by AdamW
Halt: Maximum number of iterations (60) reached.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.72077396 0.674774 ]
  Final cost: 0.003046118194519812
  Optimisation time: 3.55043888092041 seconds
  Number of iterations: 60
  SciPy result available: No

Running GradientDescent
NOTE: Boundaries ignored by <class 'pints._optimisers._gradient_descent.GradientDescent'>
Halt: Maximum number of iterations (60) reached.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.67939892 0.6815674 ]
  Final cost: 0.0038166067064497105
  Optimisation time: 3.2792601585388184 seconds
  Number of iterations: 60
  SciPy result available: No

Running IRPropMin
Halt: Maximum number of iterations (60) reached.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.72167189 0.67300586]
  Final cost: 0.0029223029353733646
  Optimisation time: 3.3285961151123047 seconds
  Number of iterations: 60
  SciPy result available: No
for optimiser in non_gradient_optimisers:
print(f"Running {optimiser.__name__}")
optim = optimiser(cost, max_unchanged_iterations=20, max_iterations=60)
results = optim.run()
optims.append(optim)
xs.append(results.x)
Running CMAES
Halt: No significant change for 20 iterations.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.72099642 0.673128 ]
  Final cost: 0.0029219755355584616
  Optimisation time: 2.9584238529205322 seconds
  Number of iterations: 42
  SciPy result available: No

Running SNES
Halt: No significant change for 20 iterations.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.72125026 0.67307644]
  Final cost: 0.0029220074793387843
  Optimisation time: 2.7991578578948975 seconds
  Number of iterations: 52
  SciPy result available: No

Running PSO
Halt: No significant change for 20 iterations.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.7335151 0.67158104]
  Final cost: 0.0030254785798547734
  Optimisation time: 2.0981178283691406 seconds
  Number of iterations: 43
  SciPy result available: No

Running XNES
Halt: Maximum number of iterations (60) reached.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.67677389 0.68423077]
  Final cost: 0.004170543052960678
  Optimisation time: 3.208811044692993 seconds
  Number of iterations: 60
  SciPy result available: No

Running NelderMead
NOTE: Boundaries ignored by <class 'pints._optimisers._nelder_mead.NelderMead'>
Halt: Maximum number of iterations (60) reached.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.72127038 0.67308243]
  Final cost: 0.002922015000486046
  Optimisation time: 0.572443962097168 seconds
  Number of iterations: 60
  SciPy result available: No

Running CuckooSearch
Halt: No significant change for 20 iterations.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.71228325 0.67482605]
  Final cost: 0.0029570571571777425
  Optimisation time: 2.216571807861328 seconds
  Number of iterations: 40
  SciPy result available: No
for optimiser in scipy_optimisers:
print(f"Running {optimiser.__name__}")
optim = optimiser(cost, max_iterations=60)
results = optim.run()
optims.append(optim)
xs.append(results.x)
Running SciPyMinimize
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.62747952 0.7 ]
  Final cost: 0.010270031808476705
  Optimisation time: 0.4111368656158447 seconds
  Number of iterations: 23
  SciPy result available: Yes

Running SciPyDifferentialEvolution
Ignoring x0. Initial conditions are not used for differential_evolution.
OptimisationResult:
  Initial parameters: [0.61271354 0.47586173]
  Optimised parameters: [0.72099808 0.67312761]
  Final cost: 0.0029219755546384587
  Optimisation time: 5.829385042190552 seconds
  Number of iterations: 20
  SciPy result available: Yes
Next, we can compare the identified parameters across the optimisers. This gives us insight into how well each optimiser traversed the cost landscape. The ground-truth values for the Chen2020 parameter set are:

- Negative electrode active material volume fraction: 0.75
- Positive electrode active material volume fraction: 0.665
for optim in optims:
print(f"| Optimiser: {optim.name()} | Results: {optim.result.x} |")
| Optimiser: AdamW | Results: [0.72077396 0.674774 ] |
| Optimiser: Gradient descent | Results: [0.67939892 0.6815674 ] |
| Optimiser: iRprop- | Results: [0.72167189 0.67300586] |
| Optimiser: Covariance Matrix Adaptation Evolution Strategy (CMA-ES) | Results: [0.72099642 0.673128 ] |
| Optimiser: Seperable Natural Evolution Strategy (SNES) | Results: [0.72125026 0.67307644] |
| Optimiser: Particle Swarm Optimisation (PSO) | Results: [0.7335151 0.67158104] |
| Optimiser: Exponential Natural Evolution Strategy (xNES) | Results: [0.67677389 0.68423077] |
| Optimiser: Nelder-Mead | Results: [0.72127038 0.67308243] |
| Optimiser: Cuckoo Search | Results: [0.71228325 0.67482605] |
| Optimiser: SciPyMinimize | Results: [0.62747952 0.7 ] |
| Optimiser: SciPyDifferentialEvolution | Results: [0.72099808 0.67312761] |
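To quantify this comparison against the ground truth listed above, a short post-processing step (plain NumPy, reusing the stored `optims` and `xs`) computes each optimiser's absolute parameter error:

# Absolute error of each optimiser's estimate relative to the Chen2020 ground truth
ground_truth = np.array([0.75, 0.665])
for optim, x in zip(optims, xs):
    error = np.abs(np.asarray(x) - ground_truth)
    print(f"{optim.name()}: absolute error = {error}")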
Many of the above optimisers found a value close to the true positive electrode active material volume fraction; however, none of them recovered the true value for the negative electrode. Next, we investigate whether this is an optimiser failure or a parameter observability issue.
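One quick way to separate these two explanations is to compare the cost at the ground-truth values with the cost at each identified optimum. The snippet below is a sketch that assumes the cost object can be called directly with a parameter vector, as the optimisers do internally; if the ground truth does not yield a lower cost, the optimisers are not at fault.

# Compare the cost at the ground truth with the cost at each identified optimum.
# Assumes the cost object is callable with a parameter vector.
truth = np.array([0.75, 0.665])
print(f"Cost at ground truth: {cost(truth)}")
for optim, x in zip(optims, xs):
    print(f"Cost at {optim.name()} optimum: {cost(x)}")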
PyBOP provides various plotting utilities to visualise the results of the optimisation.
We can quickly plot the system's response using the estimated parameters for each optimiser and the target dataset.
for optim, x in zip(optims, xs):
pybop.plot.quick(optim.cost.problem, problem_inputs=x, title=optim.name())
To assess the optimisation process, we can plot the convergence of the cost function and the trajectories of the parameters:
for optim in optims:
pybop.plot.convergence(optim, title=optim.name())
pybop.plot.parameters(optim)
Finally, we can visualise the cost landscape and the path taken by each optimiser. This should give us additional insight into whether the negative electrode active material volume fraction is observable. For a parameter to be observable, the cost landscape must have a clear minimum with respect to that parameter; in other words, varying the parameter value must have a discernible effect on the cost function.
# Plot the cost landscape with optimisation path and updated bounds
bounds = np.asarray([[0.5, 0.8], [0.55, 0.8]])
for optim in optims:
pybop.plot.surface(optim, bounds=bounds, title=optim.name())
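As a complementary check, and again assuming the cost object is callable with a parameter vector, we can sweep the negative electrode volume fraction while holding the positive electrode value at its ground truth; a nearly flat cost profile along this slice would indicate low observability.

# One-dimensional slice of the cost along the negative electrode volume fraction,
# with the positive electrode volume fraction fixed at its ground-truth value.
# Assumes the cost object is callable with a parameter vector.
for neg_fraction in np.linspace(0.5, 0.8, 7):
    print(f"Negative fraction {neg_fraction:.2f} -> cost {cost([neg_fraction, 0.665])}")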
Given the synthetic data and the corresponding system excitation, the observability of the negative electrode active material volume fraction is quite low. As such, we would need to excite the system differently, or observe a different signal, to identify a unique value.
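For example, a pulsed discharge would excite the system differently from the constant-current data used above. The sketch below is hypothetical: it assumes `pybop.Experiment` and the `experiment` argument to `predict` are available, as used in other PyBOP examples, and the step strings are illustrative only.

# Hypothetical alternative excitation: a repeated pulse-and-rest protocol.
# Assumes pybop.Experiment and predict(experiment=...) are available.
experiment = pybop.Experiment(
    [
        (
            "Discharge at 1C for 3 minutes (10 second period)",
            "Rest for 2 minutes (10 second period)",
        )
    ]
    * 5
)
pulsed_values = synth_model.predict(initial_state=initial_state, experiment=experiment)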
This notebook illustrates how to perform parameter estimation using PyBOP, with both gradient-based and non-gradient-based optimisers.