This notebook shows how to compute the optimal control (OC) signal for the noisy Wilson-Cowan (WC) model in a simple example task.
import matplotlib.pyplot as plt
import numpy as np
import os
while os.getcwd().split(os.sep)[-1] != "neurolib":
    os.chdir('..')
# We import the model, stimuli, and the optimal control package
from neurolib.models.wc import WCModel
from neurolib.utils.stimulus import ZeroInput
from neurolib.control.optimal_control import oc_wc
from neurolib.utils.plot_oc import plot_oc_singlenode, plot_oc_network
# This will reload all imports as soon as the code changes
%load_ext autoreload
%autoreload 2
We stimulate the system with a known control signal, define the resulting activity as the target, and then compute the optimal control for this target. We choose the weights such that only precision is penalized (w_p=1, w_2=0). Hence, the optimal control signal should converge to the input signal.
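The optimization minimizes a cost that combines a precision term (weighted by w_p) and a control-strength term (weighted by w_2). The following discretized sketch of such a cost is for illustration only, assuming arrays of shape (nodes, variables, time) and a uniform time step dt; it is not the library's implementation.
# Illustrative sketch only (not the library implementation), assuming arrays of
# shape (nodes, variables, time) and a uniform time step dt.
def sketch_cost(state, target, control, dt, w_p=1.0, w_2=0.0):
    precision_cost = 0.5 * w_p * np.sum((state - target) ** 2) * dt  # deviation from target
    energy_cost = 0.5 * w_2 * np.sum(control ** 2) * dt              # control strength
    return precision_cost + energy_cost
With w_p=1 and w_2=0, only the deviation from the target contributes, which is why the optimal control should reproduce the original stimulus.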
# We import the model
model = WCModel()
# Set noise strength to zero to define target state
model.params.sigma_ou = 0.
# Some parameters to define stimulation signals
dt = model.params["dt"]
duration = 40.
amplitude = 1.
period = duration / 4.
# We define a "zero-input", and a sine-input
zero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)
input = np.copy(zero_input)
input[0,1:-1] = amplitude * np.sin(2.*np.pi*np.arange(0,duration-0.1, dt)/period) # other functions or random values can be used as well
# We set the duration of the simulation and the initial values
model.params["duration"] = duration
x_init = 0.011225367461896877
y_init = 0.013126741089502588
model.params["exc_init"] = np.array([[x_init]])
model.params["inh_init"] = np.array([[y_init]])
# We set the stimulus in x and y variables, and run the simulation
model.params["exc_ext"] = input
model.params["inh_ext"] = zero_input
model.run()
# Define the result of the stimulation as target
target = np.concatenate((np.concatenate( (model.params["exc_init"], model.params["inh_init"]), axis=1)[:,:, np.newaxis],
np.stack( (model.exc, model.inh), axis=1)), axis=2)
target_input = np.concatenate( (input,zero_input), axis=0)[np.newaxis,:,:]
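# Optional sanity check (not part of the original notebook): target and
# target_input are arranged as (nodes, variables, time points).
print("target shape:      ", target.shape)
print("target_input shape:", target_input.shape)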
# Remove stimuli and re-run the simulation
# Change the sigma_ou parameter to adjust the noise strength
model.params['sigma_ou'] = 0.1
model.params['tau_ou'] = 1.
model.params["exc_ext"] = zero_input
model.params["inh_ext"] = zero_input
control = np.concatenate( (zero_input,zero_input), axis=0)[np.newaxis,:,:]
model.run()
# Combine the initial value and the simulation result into one array
state = np.concatenate((np.concatenate( (model.params["exc_init"], model.params["inh_init"]), axis=1)[:,:, np.newaxis],
np.stack( (model.exc, model.inh), axis=1)), axis=2)
plot_oc_singlenode(duration, dt, state, target, control, target_input)
The target is a periodic oscillation of the x and y variables (computed in the deterministic, noise-free system).
The noisy, undisturbed system fluctuates around zero.
For the optimization, you can now set several new parameters:

- M: the number of noise realizations that the algorithm averages over when computing the cost and its gradient.
- M_validation: the number of noise realizations used to validate the cost of the final control.
- validate_per_step: if True, the cost of every iteration is additionally validated with M_validation noise realizations.

Please note: because of the noise, the mean cost fluctuates from iteration to iteration, so more iterations do not necessarily improve the result. The optimization therefore keeps track of the control with the minimal cost found across all iterations.
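To illustrate the role of M (a conceptual sketch only, not the library's implementation), the cost of a noisy system can be estimated as a Monte-Carlo average over M independent noise realizations:
# Conceptual sketch (not the library code): estimate the mean cost over M noise
# realizations. evaluate_cost_once is a hypothetical function that runs one noisy
# simulation with the current control and returns its cost.
def mean_cost(evaluate_cost_once, M):
    return sum(evaluate_cost_once() for _ in range(M)) / M
A larger M yields a smoother cost estimate at the price of proportionally more simulations per iteration.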
Let's first optimize with the following parameters: M=20, iterations=100
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
model.params["exc_ext"] = zero_input
model.params["inh_ext"] = zero_input
# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results are printed
# Parameters will be taken from the input model
model_controlled = oc_wc.OcWc(model, target, print_array=np.arange(0,101,10),
M=20, M_validation=500, validate_per_step=True)
# We run 100 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(100)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
Compute control for a noisy system
Mean cost in iteration 0: 0.0486299027821106
Mean cost in iteration 10: 0.02795683316984877
Mean cost in iteration 20: 0.027101411958439722
Mean cost in iteration 30: 0.026543919519260453
Mean cost in iteration 40: 0.026707819124178123
Mean cost in iteration 50: 0.026786489900410732
Mean cost in iteration 60: 0.026412584686262147
Mean cost in iteration 70: 0.026425089398826186
Mean cost in iteration 80: 0.026760368474147204
Mean cost in iteration 90: 0.026954163211574594
Mean cost in iteration 100: 0.027106734179733114
Minimal cost found at iteration 36
Final cost validated with 500 noise realizations : 0.02719992592343364
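Since only precision is penalized, the optimized control should approximate the sine stimulus that generated the target. An optional check (not part of the original notebook) compares the two directly:
# Optional check (not part of the original notebook): largest absolute deviation
# between the optimized control and the stimulus that generated the target.
print("max |control - original stimulus|:", np.max(np.abs(control[0, 0, :] - target_input[0, 0, :])))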
Let's do the same thing with different parameters: M=100, iterations=30
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
model.params["exc_ext"] = zero_input
model.params["inh_ext"] = zero_input
# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results are printed
# Parameters will be taken from the input model
model_controlled = oc_wc.OcWc(model, target,print_array=np.arange(0,31,5),
M=100, M_validation=500, validate_per_step=True)
# We run 30 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(30)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_singlenode(duration, dt, state, target, control, target_input, model_controlled.cost_history)
Compute control for a noisy system
Mean cost in iteration 0: 0.044519683319845585
Mean cost in iteration 5: 0.049139417017223554
Mean cost in iteration 10: 0.050857609671347954
Mean cost in iteration 15: 0.04663531486878592
Mean cost in iteration 20: 0.046747345271133535
Mean cost in iteration 25: 0.05112611753258763
Mean cost in iteration 30: 0.04785865829049892
Minimal cost found at iteration 27
Final cost validated with 500 noise realizations : 0.045416281905513174
Let us now study a simple 2-node network of model oscillators. We first need to define the coupling matrix and the delay matrix. We can then initialize the model.
cmat = np.array( [[0., 0.5], [1.0, 0.]] ) # diagonal elements are zero; the coupling strength is 1.0 from node 0 to node 1 and 0.5 from node 1 to node 0
dmat = np.array( [[0., 0.], [0., 0.]] ) # no delay
model = WCModel(Cmat=cmat, Dmat=dmat)
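# Optional readout (not part of the original notebook), using the convention from
# the comment above: cmat[i, j] is the coupling strength from node j to node i,
# and dmat[i, j] is the corresponding delay entry.
for i in range(model.params.N):
    for j in range(model.params.N):
        if i != j:
            print(f"node {j} -> node {i}: strength {cmat[i, j]}, delay {dmat[i, j]}")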
# we define the control input matrix to enable or disable certain channels and nodes
control_mat = np.zeros( (model.params.N, len(model.state_vars)) )
control_mat[0,0] = 1. # only allow inputs in x-channel in node 0
model.params.K_gl = 10.
# Set noise strength to zero to define target state
model.params['sigma_ou'] = 0.
model.params["duration"] = duration
zero_input = ZeroInput().generate_input(duration=duration+dt, dt=dt)
input = np.copy(zero_input)
input[0,1:-1] = amplitude * np.sin(2.*np.pi*np.arange(0,duration-0.1, dt)/period) # other functions or random values can be used as well
model.params["exc_init"] = np.vstack( [0.01255381969006173, 0.01190300495001282] )
model.params["inh_init"] = np.vstack( [0.013492631513639169, 0.013312224583806076] )
# We set the stimulus in x and y variables, and run the simulation
input_nw = np.concatenate( (np.vstack( [control_mat[0,0] * input, control_mat[0,1] * input] )[np.newaxis,:,:],
np.vstack( [control_mat[1,0] * input, control_mat[1,1] * input] )[np.newaxis,:,:]), axis=0)
zero_input_nw = np.concatenate( (np.vstack( [zero_input, zero_input] )[np.newaxis,:,:],
np.vstack( [zero_input, zero_input] )[np.newaxis,:,:]), axis=0)
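# Optional sanity check (not part of the original notebook): the network stimuli
# are arranged as (nodes, input channels, time points), so slicing the second
# axis below selects the excitatory (index 0) and inhibitory (index 1) input.
print("input_nw shape:     ", input_nw.shape)
print("zero_input_nw shape:", zero_input_nw.shape)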
model.params["exc_ext"] = input_nw[:,0,:]
model.params["inh_ext"] = input_nw[:,1,:]
model.run()
# Define the result of the stimulation as target
target = np.concatenate( (np.concatenate( (model.params["exc_init"], model.params["inh_init"]), axis=1)[:,:, np.newaxis], np.stack( (model.exc, model.inh), axis=1)), axis=2)
# Remove stimuli and re-run the simulation
model.params['sigma_ou'] = 0.03
model.params['tau_ou'] = 1.
model.params["exc_ext"] = zero_input_nw[:,0,:]
model.params["inh_ext"] = zero_input_nw[:,0,:]
model.run()
# Combine the initial value and the simulation result into one array
state = np.concatenate( (np.concatenate( (model.params["exc_init"], model.params["inh_init"]), axis=1)[:,:, np.newaxis], np.stack( (model.exc, model.inh), axis=1)), axis=2)
plot_oc_network(model.params.N, duration, dt, state, target, zero_input_nw, input_nw)
Let's optimize with the following parameters: M=20, iterations=100
# we define the precision matrix to specify in which nodes and channels deviations from the target are measured
cost_mat = np.zeros( (model.params.N, len(model.output_vars)) )
cost_mat[1,0] = 1. # only measure in x-channel in node 1
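# Conceptual illustration (not the library implementation): the precision matrix
# masks which node/channel deviations enter the precision cost. With this choice,
# only the x-channel of node 1 is measured.
masked_deviation = cost_mat[:, :, np.newaxis] * (state - target)
print("nonzero deviation entries per node and channel:\n", np.count_nonzero(masked_deviation, axis=2))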
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
model.params["exc_ext"] = zero_input_nw[:,0,:]
model.params["inh_ext"] = zero_input_nw[:,0,:]
# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results are printed
# Parameters will be taken from the input model
model_controlled = oc_wc.OcWc(model,
target,
print_array=np.arange(0,101,10),
control_matrix=control_mat,
cost_matrix=cost_mat,
M=20,
M_validation=500,
validate_per_step=True)
# We run 100 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(100)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a noisy system
Mean cost in iteration 0: 0.0161042019653286
Mean cost in iteration 10: 0.029701202083900886
Mean cost in iteration 20: 0.02055100392146934
Mean cost in iteration 30: 0.01824138412316584
Mean cost in iteration 40: 0.01774943248604246
Mean cost in iteration 50: 0.00938616563892467
Mean cost in iteration 60: 0.013815979179667275
Mean cost in iteration 70: 0.011677029951767951
Mean cost in iteration 80: 0.03103645422939053
Mean cost in iteration 90: 0.018355469642118635
Mean cost in iteration 100: 0.021407393453975455
Minimal cost found at iteration 67
Final cost validated with 500 noise realizations : 0.02038125379192151
Let's do the same thing with different parameters: M=100, iterations=30
# We set the external stimulation to zero. This is the "initial guess" for the OC algorithm
model.params["exc_ext"] = zero_input_nw[:,0,:]
model.params["inh_ext"] = zero_input_nw[:,0,:]
# We load the optimal control class
# print_array (optional parameter) defines for which iterations intermediate results are printed
# Parameters will be taken from the input model
model_controlled = oc_wc.OcWc(model,
target,
print_array=np.arange(0,31,5),
control_matrix=control_mat,
cost_matrix=cost_mat,
M=100,
M_validation=500,
validate_per_step=True)
# We run 30 iterations of the optimal control gradient descent algorithm
model_controlled.optimize(30)
state = model_controlled.get_xs()
control = model_controlled.control
plot_oc_network(model.params.N, duration, dt, state, target, control, input_nw, model_controlled.cost_history, model_controlled.step_sizes_history)
Compute control for a noisy system
Mean cost in iteration 0: 0.01775755329403377
Mean cost in iteration 5: 0.010280452998278504
Mean cost in iteration 10: 0.01594708289308906
Mean cost in iteration 15: 0.028644745813145765
Mean cost in iteration 20: 0.030889247442364865
Mean cost in iteration 25: 0.02629869930972565
Mean cost in iteration 30: 0.017322464091192105
Minimal cost found at iteration 21
Final cost validated with 500 noise realizations : 0.04481574197020663