#!/usr/bin/env python
# coding: utf-8

# ## Different Optimisers for SPMe Parameter Estimation
#
# In this notebook, we demonstrate parameter estimation for a single particle model with electrolyte (SPMe) using a range of PyBOP optimisers. PyBOP offers a variety of gradient-based and non-gradient-based optimisers, with a table of the currently supported methods given in the README. In this example, we set up the model, problem, and cost function, and then investigate how the different optimisers perform on this task.
#
# ### Setting up the Environment
#
# Before we begin, we need to ensure that we have all the necessary tools. We install PyBOP and upgrade the relevant dependencies:

# In[ ]:

get_ipython().run_line_magic('pip', 'install --upgrade pip ipywidgets -q')
get_ipython().run_line_magic('pip', 'install pybop -q')

# ### Importing Libraries
#
# With the environment set up, we can now import PyBOP alongside the other libraries we will need:

# In[ ]:

import numpy as np

import pybop

pybop.plot.PlotlyManager().pio.renderers.default = "notebook_connected"

# Let's fix the random seed to generate consistent output during development, although this does not need to be done in practice.

# In[ ]:

np.random.seed(8)

# ## Generating Synthetic Data
#
# To demonstrate parameter estimation, we first need some data. We generate synthetic data using a PyBOP DFN forward model, which requires defining a parameter set and the model itself.
#
# ### Defining Parameters and Model
#
# We start by creating an example parameter set, constructing the DFN model used to generate the synthetic data, and the model we will fit (the SPMe).

# In[ ]:

parameter_set = pybop.ParameterSet.pybamm("Chen2020")
synth_model = pybop.lithium_ion.DFN(parameter_set=parameter_set)
model = pybop.lithium_ion.SPMe(parameter_set=parameter_set)

# ### Simulating the Forward Model
#
# We can then simulate the model using the `predict` method, with a default constant-current discharge to generate the voltage data.

# In[ ]:

t_eval = np.arange(0, 2000, 10)
initial_state = {"Initial SoC": 1.0}
values = synth_model.predict(t_eval=t_eval, initial_state=initial_state)

# ### Adding Noise to Voltage Data
#
# To make the parameter estimation more realistic, we add zero-mean Gaussian noise with a standard deviation of 2 mV to the voltage data.

# In[ ]:

sigma = 0.002
corrupt_values = values["Voltage [V]"].data + np.random.normal(0, sigma, len(t_eval))

# ## Identifying the Parameters
#
# We now set up the parameter estimation process by defining the dataset for optimisation and selecting the model parameters we wish to estimate.
#
# ### Creating a Dataset
#
# The dataset for optimisation is composed of the time vector, the applied current, and the noisy voltage data:

# In[ ]:

dataset = pybop.Dataset(
    {
        "Time [s]": t_eval,
        "Current function [A]": values["Current [A]"].data,
        "Voltage [V]": corrupt_values,
    }
)

# ### Defining Parameters to Estimate
#
# We select the parameters for estimation and set up their prior distributions and bounds. In this example, the active material volume fraction of each electrode (a non-geometric parameter) is selected.

# In[ ]:

parameters = pybop.Parameters(
    pybop.Parameter(
        "Negative electrode active material volume fraction",
        prior=pybop.Gaussian(0.6, 0.02),
        bounds=[0.5, 0.8],
    ),
    pybop.Parameter(
        "Positive electrode active material volume fraction",
        prior=pybop.Gaussian(0.48, 0.02),
        bounds=[0.4, 0.7],
    ),
)
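# Before running any optimiser, it can be useful to sanity-check the priors by
# drawing a few candidate starting points. The cell below is a minimal sketch
# using plain NumPy with the same means and standard deviations as the priors
# above, so that it stays self-contained.

# In[ ]:

rng = np.random.default_rng(8)
prior_samples = np.column_stack(
    (rng.normal(0.6, 0.02, size=5), rng.normal(0.48, 0.02, size=5))
)
# Each row is one candidate [negative, positive] volume fraction pair;
# all samples should fall comfortably within the bounds defined above.
print(prior_samples)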
# ### Selecting the Optimisers
#
# Now, we can select the optimisers to investigate. The first object below is a list of the gradient-based PINTS optimisers (AdamW, GradientDescent, IRPropMin). The next comprises the non-gradient-based PINTS optimisers, and the final object contains the SciPy optimisers, which include both gradient-based and non-gradient-based algorithms.

# In[ ]:

gradient_optimisers = [
    pybop.AdamW,
    pybop.GradientDescent,
    pybop.IRPropMin,
]

non_gradient_optimisers = [
    pybop.CMAES,
    pybop.SNES,
    pybop.PSO,
    pybop.XNES,
    pybop.NelderMead,
    pybop.CuckooSearch,
]

scipy_optimisers = [
    pybop.SciPyMinimize,
    pybop.SciPyDifferentialEvolution,
]

# ### Setting up the Optimisation Problem
#
# With the dataset, parameters, and optimisers defined, we can set up the optimisation problem and cost function. In this example, we loop through all of the above optimisers and store the results for later visualisation and analysis.

# In[ ]:

optims = []
xs = []
model.set_initial_state(initial_state)
problem = pybop.FittingProblem(model, parameters, dataset)
cost = pybop.SumSquaredError(problem)

for optimiser in gradient_optimisers:
    print(f"Running {optimiser.__name__}")
    # GradientDescent uses a reduced initial step size (sigma0);
    # the other optimisers use their defaults
    sigma0 = 0.01 if optimiser is pybop.GradientDescent else None
    optim = optimiser(
        cost, sigma0=sigma0, max_unchanged_iterations=20, max_iterations=60
    )
    results = optim.run()
    optims.append(optim)
    xs.append(results.x)

# In[ ]:

for optimiser in non_gradient_optimisers:
    print(f"Running {optimiser.__name__}")
    optim = optimiser(cost, max_unchanged_iterations=20, max_iterations=60)
    results = optim.run()
    optims.append(optim)
    xs.append(results.x)

# In[ ]:

for optimiser in scipy_optimisers:
    print(f"Running {optimiser.__name__}")
    optim = optimiser(cost, max_iterations=60)
    results = optim.run()
    optims.append(optim)
    xs.append(results.x)

# Next, we compare the identified parameters across the optimisers. This gives us insight into how well each optimiser traversed the cost landscape. The ground-truth values for the `Chen2020` parameter set are:
#
# - Negative electrode active material volume fraction: `0.75`
# - Positive electrode active material volume fraction: `0.665`

# In[ ]:

for optim in optims:
    print(f"| Optimiser: {optim.name()} | Results: {optim.result.x} |")

# Many of the above optimisers found the correct value for the positive electrode active material volume fraction; however, none of them found the correct value for the negative electrode. Next, we investigate whether this is an optimiser failure or a parameter observability failure.

# ## Plotting and Visualisation
#
# PyBOP provides various plotting utilities to visualise the results of the optimisation.
#
# ### Comparing Solutions
#
# We can quickly plot the system response using the estimated parameters for each optimiser alongside the target dataset.

# In[ ]:

for optim, x in zip(optims, xs):
    pybop.plot.quick(optim.cost.problem, problem_inputs=x, title=optim.name())

# ### Convergence and Parameter Trajectories
#
# To assess the optimisation process, we can plot the convergence of the cost function and the trajectories of the parameters:

# In[ ]:

for optim in optims:
    pybop.plot.convergence(optim, title=optim.name())
    pybop.plot.parameters(optim)

# ### Cost Landscape
#
# Finally, we can visualise the cost landscape and the path taken by each optimiser. This gives additional insight into whether the negative electrode active material volume fraction is observable. For a parameter to be observable, the cost landscape must have a clear minimum with respect to that parameter; in other words, changing the parameter value must have a measurable effect on the cost function. Before plotting the full surface, we can probe this numerically with the short sketch below.
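# The following is a minimal sketch of such a check, assuming the cost object is
# callable on a parameter vector (as it is when evaluated by the optimisers): we
# sweep the negative electrode fraction along a one-dimensional slice while
# holding the positive electrode at its ground-truth value of `0.665`, and watch
# how much (or how little) the cost responds.

# In[ ]:

neg_fractions = np.linspace(0.55, 0.8, 6)
for neg in neg_fractions:
    print(f"neg. fraction = {neg:.3f} -> cost = {cost([neg, 0.665]):.6f}")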
# In[ ]:

# Plot the cost landscape with the optimisation path and updated bounds
bounds = np.asarray([[0.5, 0.8], [0.55, 0.8]])
for optim in optims:
    pybop.plot.surface(optim, bounds=bounds, title=optim.name())

# Given the synthetic data and the corresponding system excitation, the observability of the negative electrode active material volume fraction is quite low. As such, we would need to excite the system in a different way, or observe a different signal, to identify a unique value.

# ### Conclusion
#
# This notebook illustrates how to perform parameter estimation using PyBOP, across both gradient-based and non-gradient-based optimisers.
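# As a possible next step, the synthetic data could be regenerated with a richer
# current profile to improve the observability of the negative electrode
# fraction. The cell below is a hedged sketch of how this might look, assuming
# `pybop.Experiment` accepts PyBaMM-style instruction strings and that `predict`
# accepts an `experiment` argument, as in other PyBOP examples; the step
# durations and rates here are illustrative only.

# In[ ]:

experiment = pybop.Experiment(
    [
        "Discharge at 1C for 10 minutes (10 second period)",
        "Rest for 5 minutes (10 second period)",
        "Charge at 0.5C for 10 minutes (10 second period)",
    ]
)
# Generate a new synthetic dataset under the richer excitation
richer_values = synth_model.predict(initial_state=initial_state, experiment=experiment)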