To run the code below, press SHIFT+ENTER on your keyboard or click the play button in the toolbar above. Feel free to create new cells using the plus button in the toolbar, or by pressing SHIFT+ENTER while this cell is selected.
In this example, we show how Brian can be used to do a parameter exploration. The most efficient way to implement this is to consider a group of neurons, where each of the neurons is described by the same model, but one or several model parameters systematically change across neurons.
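The idea can be sketched in plain NumPy (no Brian involved; a hypothetical toy model, not the HH model used below): one vectorized update applied to an array of state variables advances many units at once, each with its own parameter value.

```python
import numpy as np

# Toy relaxation model dv/dt = (p - v)/tau, one parameter value p per "neuron".
# A single vectorized Euler update advances all units simultaneously.
p = np.linspace(0.0, 1.0, 5)    # the explored parameter, one value per unit
v = np.zeros_like(p)            # state variable of all units
dt, tau = 0.01, 0.1
for _ in range(1000):
    v += dt * (p - v) / tau     # same model, applied to every unit at once
# each unit relaxes towards its own parameter value: v ≈ p
```

This is the same pattern used below: one `NeuronGroup`, one set of equations, and a per-neuron value for each explored parameter.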
%matplotlib notebook
from brian2 import *
We can speed up this "embarrassingly parallel" problem by parallelizing it over CPU cores, making use of the OpenMP interface for multithreading:
set_device('cpp_standalone')
prefs.devices.cpp_standalone.openmp_threads = 4
Using the Brian2GeNN interface, this simulation could also run on a GPU (this requires a CUDA-enabled NVIDIA GPU and an installation of brian2genn):
# These two lines would replace the above two lines:
# import brian2genn
# set_device('genn')
We will first set some general constants for our model (an HH-type model with an injected current):
area = 20000*umetre**2
Cm = (1*ufarad*cm**-2) * area
gl = (5e-5*siemens*cm**-2) * area
El = -60*mV
EK = -90*mV
ENa = 50*mV
g_kd = (30*msiemens*cm**-2) * area
VT = -63*mV
We will explore the effect of the sodium conductance $g_{Na}$ and of the injected current $I$, varying each over 100 values:
g_na_values = np.linspace(10, 100, num=100)*msiemens*cm**-2 * area
I_values = np.linspace(0, 20, num=100)*pA
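As a quick sanity check (plain NumPy, with the Brian units omitted): the two grids combine into 100 × 100 = 10,000 parameter pairs, which is the number of neurons created below.

```python
import numpy as np

# Same grids as above, without units, just to count the combinations
g_na_values = np.linspace(10, 100, num=100)
I_values = np.linspace(0, 20, num=100)
n_neurons = len(g_na_values) * len(I_values)
print(n_neurons)  # → 10000
```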
We now define the model, with $g_{Na}$ and $I$ as parameters, set independently for each neuron, and create one neuron for each combination of parameters:
eqs = Equations('''
dv/dt = (gl*(El-v)-
g_na*(m*m*m)*h*(v-ENa)-
g_kd*(n*n*n*n)*(v-EK) + I)/Cm : volt
dm/dt = alpha_m*(1-m)-beta_m*m : 1
dn/dt = alpha_n*(1-n)-beta_n*n : 1
dh/dt = alpha_h*(1-h)-beta_h*h : 1
alpha_m = 0.32*(mV**-1)*(13*mV-v+VT)/
(exp((13*mV-v+VT)/(4*mV))-1.)/ms : Hz
beta_m = 0.28*(mV**-1)*(v-VT-40*mV)/
(exp((v-VT-40*mV)/(5*mV))-1)/ms : Hz
alpha_h = 0.128*exp((17*mV-v+VT)/(18*mV))/ms : Hz
beta_h = 4./(1+exp((40*mV-v+VT)/(5*mV)))/ms : Hz
alpha_n = 0.032*(mV**-1)*(15*mV-v+VT)/
(exp((15*mV-v+VT)/(5*mV))-1.)/ms : Hz
beta_n = .5*exp((10*mV-v+VT)/(40*mV))/ms : Hz
I : amp (constant)
g_na : siemens (constant)
''')
neuron = NeuronGroup(len(g_na_values)*len(I_values), eqs,
method='exponential_euler',
threshold='v>-20*mV', refractory='v>-20*mV')
neuron.v = El
spike_mon = SpikeMonitor(neuron)
We now create all combinations of the explored parameters, and assign them to the individual neurons. Running the "network" of neurons will then explore all the parameter combinations.
all_g_na_values, all_I_values = np.meshgrid(g_na_values, I_values)
all_g_na_values = all_g_na_values.flat[:]
all_I_values = all_I_values.flat[:]
neuron.g_na = all_g_na_values
neuron.I = all_I_values
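The `meshgrid` + flatten pattern above pairs every value of one grid with every value of the other. A tiny dimensionless sketch (stand-in arrays, not the actual parameter values) shows the pairing:

```python
import numpy as np

a = np.array([1, 2, 3])    # stand-in for g_na_values
b = np.array([10, 20])     # stand-in for I_values
A, B = np.meshgrid(a, b)   # default 'xy' indexing: shape (len(b), len(a))
pairs = [(int(x), int(y)) for x, y in zip(A.flat, B.flat)]
# every (a, b) combination appears exactly once
print(pairs)  # → [(1, 10), (2, 10), (3, 10), (1, 20), (2, 20), (3, 20)]
```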
We start the run and determine the firing rate for each neuron:
run(10*second)
rates = spike_mon.count/(10*second)/Hz
We can now plot the firing rate as a function of the two explored parameters (in this example here, vertical slices of the graph can also be interpreted as the f/I curve of the neuron for varying values of $g_{Na}$):
I_index = ((neuron.I - np.min(I_values)) / np.diff(I_values)[0]).round().astype('int')
Na_index = ((neuron.g_na - np.min(g_na_values)) / np.diff(g_na_values)[0]).round().astype('int')
matrix = np.full((len(I_values), len(g_na_values)), np.nan)
matrix[I_index, Na_index] = rates
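The index computation above exploits the even spacing of the grids: subtracting the minimum and dividing by the step recovers each value's grid position. A small units-free sketch of the round trip:

```python
import numpy as np

# Evenly spaced grid, as produced by np.linspace above (units dropped)
I_values = np.linspace(0, 20, num=100)
some_I = np.repeat(I_values, 3)   # stand-in for neuron.I (each value used 3 times)
step = np.diff(I_values)[0]
I_index = ((some_I - I_values.min()) / step).round().astype(int)
# mapping back through the grid reproduces the original values exactly
assert np.array_equal(I_values[I_index], some_I)
```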
import matplotlib as mpl
import matplotlib.pyplot as plt

norm = mpl.colors.BoundaryNorm(np.arange(0, 19), plt.cm.viridis.N)
fig, ax = plt.subplots()
m = ax.imshow(matrix, norm=norm)
# We do manual ticks, easier than using extent and getting the scaling right
ticks = [0, 49, 99]
ax.set(xticks=ticks, xticklabels=['%.1f' % (g_na_values[i]/(mS*cm**-2 * area)) for i in ticks],
yticks=ticks, yticklabels=['%.1f' % (I_values[i]/pA) for i in ticks],
xlabel='$g_{Na}$ (mS/cm²)', ylabel='$I$ (pA)')
ax.axes.invert_yaxis()
cbar = fig.colorbar(m)
cbar.set_label('firing rate (spikes/s)')