To run the code below, press SHIFT+ENTER on your keyboard or press the play button in the toolbar above. Feel free to create new cells using the plus button in the toolbar, or by pressing SHIFT+ENTER while this cell is selected.
This example is a crude "pitch detector" network that performs autocorrelation of an audio signal. It works with coincidence detector neurons that receive two copies of the input (which has been transformed into spikes by an equally crude "periphery neuron" model), with a certain delay between the two inputs. Depending on their respective delay, neurons are sensitive to different periodicities.
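The autocorrelation idea behind the network can be sketched independently of Brian in plain NumPy: correlate a signal with delayed copies of itself and pick the delay with the strongest match. This is a hypothetical illustration, not part of the model below; the signal, frequencies, and search range are arbitrary choices.

```python
import numpy as np

# Toy autocorrelation-based pitch estimation (illustration only)
fs = 44100                       # sample rate in Hz
f0 = 220.0                       # true pitch of the test signal
t = np.arange(fs) / fs           # one second of samples
signal = np.sin(2 * np.pi * f0 * t)

# Correlate the signal with delayed copies of itself; the lag with the
# strongest correlation corresponds to the period 1/f0.
min_lag, max_lag = fs // 1000, fs // 50    # search range: 1000 Hz .. 50 Hz
lags = np.arange(min_lag, max_lag)
corrs = np.array([np.dot(signal[:-lag], signal[lag:]) for lag in lags])
best_lag = lags[np.argmax(corrs)]
estimated_pitch = fs / best_lag
print(estimated_pitch)           # close to 220 Hz
```

The network below implements the same principle with spikes: each coincidence detector compares the input to a copy of itself delayed by its characteristic period.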
The example shows how Brian's high-level model descriptions can be seamlessly combined with low-level code in a target language, in this case C++. Such code can be necessary to extend Brian's functionality without sacrificing performance, e.g. in applications that require real-time processing of external stimuli.
In this code, we use this mechanism to provide two possible sources of audio input: a pre-recorded audio file (use_microphone = False), or real-time input from a microphone (use_microphone = True). For the latter, the portaudio library needs to be installed. Also note that access to the computer's microphone is not possible when running the notebook on an external server such as mybinder.
from brian2 import *
import os
We'll use the high-performance C++ standalone mode; otherwise we are not guaranteed that processing is faster than real time (necessary when using microphone input).
set_device('cpp_standalone', directory='example_4')
We first set a few global parameters of the model:
sample_rate = 44.1*kHz
buffer_size = 128
defaultclock.dt = 1/sample_rate
runtime = 4.5*second
# Receptor neurons ("ear")
max_delay = 20*ms # 50 Hz
tau_ear = 1*ms
tau_th = 5*ms
# Coincidence detectors
min_freq = 50*Hz
max_freq = 1000*Hz
num_neurons = 300
tau = 1*ms
sigma = .1
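As a quick sanity check of these parameters (a hypothetical side calculation, not part of the model itself), the detector neurons will cover min_freq to max_freq on a logarithmic scale, and the delayed synaptic copy of the input will use a delay of 1/freq, whose maximum matches max_delay:

```python
import numpy as np

# Log-spaced preferred frequencies between 50 Hz and 1000 Hz, as used for
# the detector neurons further below, and the corresponding delays 1/freq.
min_freq, max_freq, num_neurons = 50.0, 1000.0, 300
i = np.arange(num_neurons)
freqs = np.exp(np.log(min_freq) + i / (num_neurons - 1) * np.log(max_freq / min_freq))
delays = 1.0 / freqs   # in seconds

print(freqs[0], freqs[-1])   # first and last preferred frequency (~50 Hz and ~1000 Hz)
print(delays.max())          # ~0.02 s = 20 ms, matching max_delay
```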
The model equations (see code further down) refer to a get_sample function that returns the next available audio sample. Since we cannot express this function as mathematical equations, we directly provide its implementation in the target language, C++. Since the code is relatively long, we do not include it inline here but instead store the source code in separate files. We provide two such files: sound_from_mic.cpp for sound input from a microphone (used when use_microphone is set to True), and sound_from_file.cpp for sound read from an uncompressed WAV audio file. These files will be compiled when needed because we provide their names as arguments to the sources keyword. Similarly, we include the function declaration from the sound_input.h header file (identical for both cases) by specifying it via the headers keyword. We further customize the compiled code by providing preprocessor macros via the define_macros keyword. Finally, since we are making use of the portaudio library for microphone input, we link to it via the libraries keyword:
use_microphone = False
if use_microphone:
    # Now comes the connection to the microphone code.
    @implementation('cpp', '//actual code in sound_from_mic.cpp',
                    sources=[os.path.abspath('sound_from_mic.cpp')],
                    headers=['"{}"'.format(os.path.abspath('sound_input.h'))],
                    libraries=['portaudio'],
                    define_macros=[('BUFFER_SIZE', buffer_size),
                                   ('SAMPLE_RATE', sample_rate/Hz)])
    @check_units(t=second, result=1)
    def get_sample(t):
        raise NotImplementedError('Use a C++-based code generation target.')
else:
    # Instead of using the microphone, use a sound file
    @implementation('cpp', '//actual code in sound_from_file.cpp',
                    sources=[os.path.abspath('sound_from_file.cpp')],
                    headers=['"{}"'.format(os.path.abspath('sound_input.h'))],
                    define_macros=[('FILENAME', r'\"{}\"'.format(os.path.abspath('scale_flute.wav')))])
    @check_units(t=second, result=1)
    def get_sample(t):
        raise NotImplementedError('Use a C++-based code generation target.')
We now specify our neural and synaptic model, making use of the get_sample
function as if it were one of the standard functions provided by Brian:
gain = 50
# Note that the `get_sample` function does not actually make use of the time `t` that it is given; for
# simplicity, it assumes that it is called only once per time step. This is enforced by using our
# `constant over dt` feature -- the variable `sound` can be used in several places (which is important
# here, since we want to record it as well):
eqs_ear = '''
dx/dt = (sound - x)/tau_ear: 1 (unless refractory)
dth/dt = (0.1*x - th)/tau_th : 1
sound = clip(get_sample(t), 0, inf) : 1 (constant over dt)
'''
receptors = NeuronGroup(1, eqs_ear, threshold='x>th', reset='x=0; th = th*2.5 + 0.01',
                        refractory=2*ms, method='exact')
receptors.th = 1
sound_mon = StateMonitor(receptors, 'sound', record=0)
eqs_neurons = '''
dv/dt = -v/tau+sigma*(2./tau)**.5*xi : 1
freq : Hz (constant)
'''
neurons = NeuronGroup(num_neurons, eqs_neurons, threshold='v>1', reset='v=0',
                      method='euler')
neurons.freq = 'exp(log(min_freq/Hz)+(i*1.0/(num_neurons-1))*log(max_freq/min_freq))*Hz'
synapses = Synapses(receptors, neurons, on_pre='v += 0.5',
                    multisynaptic_index='k')
synapses.connect(n=2) # one synapse without delay; one with delay
synapses.delay['k == 1'] = '1/freq_post'
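The coincidence-detection principle behind the two synapses can be illustrated without Brian: a detector tuned to delay d sees many coincidences between the input spike train and a copy delayed by d exactly when the input has period d. The following is a hypothetical toy model (the helper function, tolerance, and frequencies are made up for illustration):

```python
import numpy as np

# Toy coincidence detector: count how often a delayed copy of the spike
# train lands on top of an undelayed spike.
def coincidences(spike_times, delay, tolerance=1e-3):
    delayed = spike_times + delay
    return sum(np.min(np.abs(spike_times - t)) < tolerance for t in delayed)

period = 1 / 200.0                   # input periodicity: 200 Hz
spikes = np.arange(0, 1, period)     # perfectly periodic spike train

matched = coincidences(spikes, delay=period)       # detector tuned to 200 Hz
mismatched = coincidences(spikes, delay=1/310.0)   # detector tuned to 310 Hz
print(matched, mismatched)   # the tuned detector sees far more coincidences
```

In the network, the noisy membrane dynamics and the v += 0.5 synaptic weight play the same role: only near-simultaneous arrival of the delayed and undelayed spike pushes a detector above threshold.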
We record the spikes of the "pitch detector" neurons, and run the simulation:
spikes = SpikeMonitor(neurons)
run(runtime)
After the simulation has finished, we plot the raw sound input as well as its spectrogram, and the spiking activity of the detector neurons:
from plotly import tools
from plotly.offline import iplot, init_notebook_mode
import plotly.graph_objs as go
from scipy.signal import spectrogram
init_notebook_mode(connected=True)
fig = tools.make_subplots(5, 1, shared_xaxes=True,
                          specs=[[{}], [{'rowspan': 2}], [None], [{'rowspan': 2}], [None]],
                          print_grid=False
                          # subplot_titles=('Raw sound signal', 'Spectrogram of sound signal',
                          #                 'Spiking activity')
                          )
trace = go.Scatter(x=sound_mon.t/second,
                   y=sound_mon.sound[0],
                   name='sound signal',
                   mode='lines',
                   line={'color': '#1f77b4'},
                   showlegend=False)
fig.append_trace(trace, 1, 1)
f, t, Sxx = spectrogram(sound_mon.sound[0], fs=sample_rate/Hz, nperseg=2**12, window='hamming')
trace = go.Heatmap(x=t, y=f, z=10*np.log10(Sxx), showscale=False,
                   colorscale='Viridis', name='PSD')
fig.append_trace(trace, 2, 1)
trace = go.Scatter(x=spikes.t/second,
                   y=neurons.freq[spikes.i]/Hz,
                   marker={'symbol': 'line-ns', 'line': {'width': 1, 'color': '#1f77b4'},
                           'color': '#1f77b4'},
                   mode='markers',
                   name='spikes', showlegend=False)
fig.append_trace(trace, 4, 1)
fig['layout'].update(xaxis={'title': 'time (in s)',
                            'range': (0.4, runtime/second)},
                     yaxis1={'title': 'amplitude',
                             'showticklabels': False},
                     yaxis2={'type': 'log',
                             'range': (0.9*np.log10(min_freq/Hz), 1.1*np.log10(500)),
                             'title': 'Frequency (Hz)'},
                     yaxis3={'type': 'log',
                             'range': (0.9*np.log10(min_freq/Hz), 1.1*np.log10(500)),
                             'title': 'Preferred\nFrequency (Hz)'})
iplot(fig)
If you ran the above code for the pre-recorded sound file, you should clearly see that four separate, ascending notes were played.