This is a hands-on tutorial for complete newcomers to Essentia. Essentia combines the computation speed of its core C++ code with a Python environment, which makes fast prototyping and scientific research very easy.
First and foremost, if you are new to Python, we recommend using the IPython interactive shell instead of the standard Python interpreter. Optionally, if you are familiar with Python notebooks, you may want to use the one created for this tutorial for a more interactive experience. It can be found in the src/examples/tutorial folder in the essentia_python_tutorial.ipynb file. Read how to use Python notebooks here.
You should have the NumPy package installed, which gives Python the ability to work with vectors and matrices in pretty much the same way as Matlab. You can also install SciPy, which provides functionality similar to Matlab’s toolboxes, although we won’t be using it in this tutorial. You should have the matplotlib package installed if you want to be able to do some plotting. Other recommended packages include scikit-learn for data analysis and machine learning and seaborn for visualization.
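If any of these packages are missing, one common way to install them (assuming a pip-based Python setup and that you are inside IPython, where ! runs shell commands; adapt to your own environment) is:
!pip install numpy scipy matplotlib scikit-learn seaborn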
A big strength of Essentia is its considerably large collection of algorithms for audio processing and analysis, which have been optimized and tested and which you can rely on to build your own signal analysis. In other words, you often do not have to chase around lots of toolboxes to achieve what you want. For more details on the algorithms, have a look either at the algorithms overview or at the complete reference.
In this section we will focus on how to use Essentia in the standard mode (think Matlab). There is another section that you can read afterwards about using the streaming mode.
We will have a look at some basic functionality: how to load audio, how to compute spectral features such as MFCCs frame by frame, how to store results in a Pool, and how to write them to a file.
Note: all the following commands need to be typed in a Python interpreter. It is highly recommended to use IPython and to start it with the --pylab option to have interactive plots.
Let's start by investigating the Essentia package a bit.
# first, we need to import our essentia module. It is aptly named 'essentia'!
import essentia
# as there are 2 operating modes in essentia which share the same algorithms,
# the algorithms are dispatched into 2 submodules:
import essentia.standard
import essentia.streaming
# let's have a look at what is in there
print(dir(essentia.standard))
# you can also do it by using autocompletion in IPython, typing "essentia.standard." and pressing Tab
['AfterMaxToBeforeMaxEnergyRatio', 'AllPass', 'AudioLoader', 'AudioOnsetsMarker', 'AudioWriter', 'AutoCorrelation', 'BFCC', 'BPF', 'BandPass', 'BandReject', 'BarkBands', 'BeatTrackerDegara', 'BeatTrackerMultiFeature', 'Beatogram', 'BeatsLoudness', 'BinaryOperator', 'BinaryOperatorStream', 'BpmHistogram', 'BpmHistogramDescriptors', 'BpmRubato', 'CartesianToPolar', 'CentralMoments', 'Centroid', 'ChordsDescriptors', 'ChordsDetection', 'ChordsDetectionBeats', 'Chromagram', 'Clipper', 'ConstantQ', 'Crest', 'CrossCorrelation', 'CubicSpline', 'DCRemoval', 'DCT', 'Danceability', 'Decrease', 'Derivative', 'DerivativeSFX', 'Dissonance', 'DistributionShape', 'Duration', 'DynamicComplexity', 'ERBBands', 'EasyLoader', 'EffectiveDuration', 'Energy', 'EnergyBand', 'EnergyBandRatio', 'Entropy', 'Envelope', 'EqloudLoader', 'EqualLoudness', 'Extractor', 'FFT', 'FFTC', 'FadeDetection', 'Flatness', 'FlatnessDB', 'FlatnessSFX', 'Flux', 'FrameCutter', 'FrameGenerator', 'FrameToReal', 'FreesoundExtractor', 'FrequencyBands', 'GFCC', 'GeometricMean', 'HFC', 'HPCP', 'HarmonicBpm', 'HarmonicMask', 'HarmonicModelAnal', 'HarmonicPeaks', 'HighPass', 'HighResolutionFeatures', 'HprModelAnal', 'HpsModelAnal', 'IDCT', 'IFFT', 'IIR', 'Inharmonicity', 'InstantPower', 'Intensity', 'Key', 'KeyExtractor', 'LPC', 'Larm', 'Leq', 'LevelExtractor', 'LogAttackTime', 'LoopBpmConfidence', 'LoopBpmEstimator', 'Loudness', 'LoudnessEBUR128', 'LoudnessVickers', 'LowLevelSpectralEqloudExtractor', 'LowLevelSpectralExtractor', 'LowPass', 'MFCC', 'Magnitude', 'MaxFilter', 'MaxMagFreq', 'MaxToTotal', 'Mean', 'Median', 'MelBands', 'MetadataReader', 'Meter', 'MinToTotal', 'MonoLoader', 'MonoMixer', 'MonoWriter', 'MovingAverage', 'MultiPitchKlapuri', 'MultiPitchMelodia', 'Multiplexer', 'MusicExtractor', 'NoiseAdder', 'NoveltyCurve', 'NoveltyCurveFixedBpmEstimator', 'OddToEvenHarmonicEnergyRatio', 'OnsetDetection', 'OnsetDetectionGlobal', 'OnsetRate', 'Onsets', 'OverlapAdd', 'PCA', 'Panning', 'PeakDetection', 'PercivalBpmEstimator', 'PercivalEnhanceHarmonics', 'PercivalEvaluatePulseTrains', 'PitchContourSegmentation', 'PitchContours', 'PitchContoursMelody', 'PitchContoursMonoMelody', 'PitchContoursMultiMelody', 'PitchFilter', 'PitchMelodia', 'PitchSalience', 'PitchSalienceFunction', 'PitchSalienceFunctionPeaks', 'PitchYin', 'PitchYinFFT', 'PolarToCartesian', 'PoolAggregator', 'PowerMean', 'PowerSpectrum', 'PredominantPitchMelodia', 'RMS', 'RawMoments', 'ReplayGain', 'Resample', 'ResampleFFT', 'RhythmDescriptors', 'RhythmExtractor', 'RhythmExtractor2013', 'RhythmTransform', 'RollOff', 'SBic', 'Scale', 'SilenceRate', 'SineModelAnal', 'SineModelSynth', 'SineSubtraction', 'SingleBeatLoudness', 'SingleGaussian', 'Slicer', 'SpectralCentroidTime', 'SpectralComplexity', 'SpectralContrast', 'SpectralPeaks', 'SpectralWhitening', 'Spectrum', 'SpectrumCQ', 'SpectrumToCent', 'Spline', 'SprModelAnal', 'SprModelSynth', 'SpsModelAnal', 'SpsModelSynth', 'StartStopSilence', 'StereoDemuxer', 'StereoMuxer', 'StereoTrimmer', 'StochasticModelAnal', 'StochasticModelSynth', 'StrongDecay', 'StrongPeak', 'SuperFluxExtractor', 'SuperFluxNovelty', 'SuperFluxPeaks', 'TCToTotal', 'TempoScaleBands', 'TempoTap', 'TempoTapDegara', 'TempoTapMaxAgreement', 'TempoTapTicks', 'TonalExtractor', 'TonicIndianArtMusic', 'TriangularBands', 'TriangularBarkBands', 'Trimmer', 'Tristimulus', 'TuningFrequency', 'TuningFrequencyExtractor', 'UnaryOperator', 'UnaryOperatorStream', 'Variance', 'Vibrato', 'WarpedAutoCorrelation', 'Windowing', 'YamlInput', 'YamlOutput', 'ZeroCrossingRate', 
'__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__spec__', '_c', '_create_essentia_class', '_create_python_algorithms', '_essentia', '_reloadAlgorithms', '_sys', 'algorithmInfo', 'algorithmNames', 'copy', 'essentia', 'iteritems']
This list contains all Essentia algorithms available in standard mode. You can get inline help for any algorithm you are interested in using the help command (you can also see it by typing "MFCC?" in IPython). You can also use our online algorithm reference.
help(essentia.standard.MFCC)
Help on class Algo in module essentia.standard: class Algo(Algorithm) | MFCC | | | Inputs: | | [vector_real] spectrum - the audio spectrum | | | Outputs: | | [vector_real] bands - the energies in mel bands | [vector_real] mfcc - the mel frequency cepstrum coefficients | | | Parameters: | | dctType: | integer ∈ [2,3] (default = 2) | the DCT type | | highFrequencyBound: | real ∈ (0,inf) (default = 11000) | the upper bound of the frequency range [Hz] | | inputSize: | integer ∈ (1,inf) (default = 1025) | the size of input spectrum | | liftering: | integer ∈ [0,inf) (default = 0) | the liftering coefficient. Use '0' to bypass it | | logType: | string ∈ {natural,dbpow,dbamp,log} (default = "dbamp") | logarithmic compression type. Use 'dbpow' if working with power and 'dbamp' | if working with magnitudes | | lowFrequencyBound: | real ∈ [0,inf) (default = 0) | the lower bound of the frequency range [Hz] | | normalize: | string ∈ {unit_sum,unit_max} (default = "unit_sum") | 'unit_max' makes the vertex of all the triangles equal to 1, 'unit_sum' | makes the area of all the triangles equal to 1 | | numberBands: | integer ∈ [1,inf) (default = 40) | the number of mel-bands in the filter | | numberCoefficients: | integer ∈ [1,inf) (default = 13) | the number of output mel coefficients | | sampleRate: | real ∈ (0,inf) (default = 44100) | the sampling rate of the audio signal [Hz] | | type: | string ∈ {magnitude,power} (default = "power") | use magnitude or power spectrum | | warpingFormula: | string ∈ {slaneyMel,htkMel} (default = "slaneyMel") | The scale implementation type. use 'htkMel' to emulate its behaviour. | Default slaneyMel. | | weighting: | string ∈ {warping,linear} (default = "warping") | type of weighting function for determining triangle area | | | Description: | | This algorithm computes the mel-frequency cepstrum coefficients of a | spectrum. As there is no standard implementation, the MFCC-FB40 is used by | default: | - filterbank of 40 bands from 0 to 11000Hz | - take the log value of the spectrum energy in each mel band | - DCT of the 40 bands down to 13 mel coefficients | There is a paper describing various MFCC implementations [1]. | | The parameters of this algorithm can be configured in order to behave like | HTK [3] as follows: | - type = 'magnitude' | - warpingFormula = 'htkMel' | - weighting = 'linear' | - highFrequencyBound = 8000 | - numberBands = 26 | - numberCoefficients = 13 | - normalize = 'unit_max' | - dctType = 3 | - logType = 'log' | - liftering = 22 | | In order to completely behave like HTK the audio signal has to be scaled by | 2^15 before the processing and if the Windowing and FrameCutter algorithms | are used they should also be configured as follows. | | FrameGenerator: | - frameSize = 1102 | - hopSize = 441 | - startFromZero = True | - validFrameThresholdRatio = 1 | | Windowing: | - type = 'hamming' | - size = 1102 | - zeroPadding = 946 | - normalized = False | | This algorithm depends on the algorithms MelBands and DCT and therefore | inherits their parameter restrictions. An exception is thrown if any of these | restrictions are not met. The input "spectrum" is passed to the MelBands | algorithm and thus imposes MelBands' input requirements. Exceptions are | inherited by MelBands as well as by DCT. | | IDCT can be used to compute smoothed Mel Bands. 
In order to do this: | - compute MFCC | - smoothedMelBands = 10^(IDCT(MFCC)/20) | | Note: The second step assumes that 'logType' = 'dbamp' was used to compute | MFCCs, otherwise that formula should be changed in order to be consistent. | | References: | [1] T. Ganchev, N. Fakotakis, and G. Kokkinakis, "Comparative evaluation | of various MFCC implementations on the speaker verification task," in | International Conference on Speach and Computer (SPECOM’05), 2005, | vol. 1, pp. 191–194. | | [2] Mel-frequency cepstrum - Wikipedia, the free encyclopedia, | http://en.wikipedia.org/wiki/Mel_frequency_cepstral_coefficient | | [3] Young, S. J., Evermann, G., Gales, M. J. F., Hain, T., Kershaw, D., | Liu, X., … Woodland, P. C. (2009). The HTK Book (for HTK Version 3.4). | Construction, (July 2000), 384, https://doi.org/http://htk.eng.cam.ac.uk | | Method resolution order: | Algo | Algorithm | builtins.object | | Methods defined here: | | __call__(self, *args) | | __init__(self, **kwargs) | | __str__(self) | | compute(self, *args) | | configure(self, **kwargs) | | ---------------------------------------------------------------------- | Data descriptors defined here: | | __dict__ | dictionary for instance variables (if defined) | | __weakref__ | list of weak references to the object (if defined) | | ---------------------------------------------------------------------- | Data and other attributes defined here: | | __struct__ = {'category': 'Spectral', 'description': 'This algorithm c... | | ---------------------------------------------------------------------- | Methods inherited from Algorithm: | | __compute__(...) | compute the algorithm | | __configure__(...) | Configure the algorithm | | __new__(*args, **kwargs) from builtins.type | Create and return a new object. See help(type) for accurate signature. | | getDoc(...) | Returns the doc string for the algorithm | | getStruct(...) | Returns the doc struct for the algorithm | | inputNames(...) | Returns the names of the inputs of the algorithm. | | inputType(...) | Returns the type of the input given by its name | | name(...) | Returns the name of the algorithm. | | outputNames(...) | Returns the names of the outputs of the algorithm. | | paramType(...) | Returns the type of the parameter given by its name | | paramValue(...) | Returns the value of the parameter or None if not yet configured | | parameterNames(...) | Returns the names of the parameters for this algorithm. | | reset(...) | Reset the algorithm to its initial state (if any).
Before you can use algorithms in Essentia, you first need to instantiate (create) them. When doing so, you can give them parameters which they may need to work properly, such as the filename of the audio file in the case of an audio loader.
Once you have instantiated an algorithm, nothing has happened yet, but your algorithm is ready to be used. It works like a function: you have to call it to make things happen (technically, it is a function object).
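As a minimal sketch of this instantiate-then-call pattern (using MFCC and its numberCoefficients parameter from the help output above; 'some_spectrum' is a placeholder for data you would compute yourself):
from essentia.standard import MFCC

mfcc20 = MFCC(numberCoefficients=20)       # instantiation + configuration via keyword parameters
# bands, coeffs = mfcc20(some_spectrum)    # calling the object computes its outputs
mfcc20.configure(numberCoefficients=13)    # algorithms can also be reconfigured later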
Essentia has a selection of audio loaders (you can spot AudioLoader, MonoLoader, EasyLoader, and EqloudLoader in the algorithm list above); in this tutorial we will use MonoLoader:
# we start by instantiating the audio loader:
loader = essentia.standard.MonoLoader(filename='../../../test/audio/recorded/dubstep.wav')
# and then we actually perform the loading:
audio = loader()
By default, MonoLoader will output audio with a 44100 Hz sample rate, downmixed to mono. To make sure that this actually worked, let's plot a 1-second slice of the audio, from t = 1 sec to t = 2 sec:
# pylab contains the plot() function, as well as figure, etc... (same names as Matlab)
from pylab import plot, show, figure, imshow
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.figsize'] = (15, 6) # set plot sizes to something larger than default
plot(audio[1*44100:2*44100])
plt.title("This is how the 2nd second of this audio looks like:")
show() # unnecessary if you started "ipython --pylab"
Note that if you have started IPython with the --pylab option, the call to show() is not necessary, and you don't have to close the plot to regain control of your terminal.
Now let's compute the MFCCs of this audio. We will need three algorithms from the standard module: a Windowing, a Spectrum, and an MFCC:
from essentia.standard import *
w = Windowing(type = 'hann')
spectrum = Spectrum() # FFT() would return the complex FFT, here we just want the magnitude spectrum
mfcc = MFCC()
Once algorithms have been instantiated, they work like normal functions. Note that the MFCC algorithm returns two values: the band energies and the coefficients, and that you can get (unpack) them the same way as in Matlab. Let's compute and plot the spectrum, mel band energies, and MFCCs for a frame of audio:
frame = audio[6*44100 : 6*44100 + 1024]
spec = spectrum(w(frame))
mfcc_bands, mfcc_coeffs = mfcc(spec)
plot(spec)
plt.title("The spectrum of a frame:")
show() # unnecessary if you started "ipython --pylab"
plot(mfcc_bands)
plt.title("Mel band spectral energies of a frame:")
show() # unnecessary if you started "ipython --pylab"
plot(mfcc_coeffs)
plt.title("First 13 MFCCs of a frame:")
show() # unnecessary if you started "ipython --pylab"
Now let's compute the mel band energies and MFCCs in all frames.
The way we would do it in Matlab is by slicing the frames manually (the first frame starts at time 0, i.e., with the first audio sample):
mfccs = []
melbands = []
frameSize = 1024
hopSize = 512
for fstart in range(0, len(audio)-frameSize, hopSize):
    frame = audio[fstart:fstart+frameSize]
    mfcc_bands, mfcc_coeffs = mfcc(spectrum(w(frame)))
    mfccs.append(mfcc_coeffs)
    melbands.append(mfcc_bands)
This is OK, but there is a much nicer way of computing frames in Essentia: the FrameGenerator, which is the FrameCutter algorithm wrapped into a Python generator:
mfccs = []
melbands = []
for frame in FrameGenerator(audio, frameSize=1024, hopSize=512, startFromZero=True):
    mfcc_bands, mfcc_coeffs = mfcc(spectrum(w(frame)))
    mfccs.append(mfcc_coeffs)
    melbands.append(mfcc_bands)
# transpose to have it in a better shape
# we need to convert the list to an essentia.array first (== numpy.array of floats)
mfccs = essentia.array(mfccs).T
melbands = essentia.array(melbands).T
# and plot
imshow(melbands[:,:], aspect = 'auto', origin='lower', interpolation='none')
plt.title("Mel band spectral energies in frames")
show() # unnecessary if you started "ipython --pylab"
imshow(mfccs[1:,:], aspect='auto', origin='lower', interpolation='none')
plt.title("MFCCs in frames")
show() # unnecessary if you started "ipython --pylab"
You can configure the frame and hop size of the frame generator, and whether the first frame should start at time zero or be centered there. For the complete list of available parameters see the documentation for the FrameCutter.
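As a small illustration of these options (the frame and hop sizes below are arbitrary; only parameters already used in this tutorial are assumed):
# startFromZero=True  -> the first frame starts at the first audio sample
# startFromZero=False -> the first frame is centered at time zero
for frame in FrameGenerator(audio, frameSize=2048, hopSize=1024, startFromZero=False):
    pass  # process each frame here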
Note that when plotting the MFCCs, we ignored the first coefficient in order to disregard the overall power of the signal and only plot its spectral shape.
A Pool is a container similar to a C++ map or Python dict which can contain any type of values (easy in Python, not as much in C++...). Values are stored in it under a name which represents the full path to those values, with dot ('.') characters used as separators. You can think of it as a directory tree, or as namespace(s) + local name.
Examples of valid names are: "bpm", "lowlevel.mfcc", "highlevel.genre.rock.probability", etc.
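A minimal sketch of how such names are used with a pool (the values below are only for illustration, reusing the mfcc_coeffs computed above):
pool = essentia.Pool()
pool.add('bpm', 120.0)                    # 'add' appends a value under the given name
pool.add('lowlevel.mfcc', mfcc_coeffs)    # dotted names act as a namespace hierarchy
print(pool['bpm'])                        # all values added under 'bpm' so far
print(pool.descriptorNames())             # the names of all descriptors in the pool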
Let's redo the previous computations using a pool. The pool has the nice advantage that the data you get out of it is already in an essentia.array format (which is equal to a numpy.array of floats), so you can call transpose (.T) directly on it.
pool = essentia.Pool()
for frame in FrameGenerator(audio, frameSize = 1024, hopSize = 512, startFromZero=True):
    mfcc_bands, mfcc_coeffs = mfcc(spectrum(w(frame)))
    pool.add('lowlevel.mfcc', mfcc_coeffs)
    pool.add('lowlevel.mfcc_bands', mfcc_bands)
imshow(pool['lowlevel.mfcc_bands'].T, aspect = 'auto', origin='lower', interpolation='none')
plt.title("Mel band spectral energies in frames")
show() # unnecessary if you started "ipython --pylab"
imshow(pool['lowlevel.mfcc'].T[1:,:], aspect='auto', origin='lower', interpolation='none')
plt.title("MFCCs in frames")
show() # unnecessary if you started "ipython --pylab"
The pool can be written to a file using the YamlOutput algorithm (YAML by default, or JSON):
output = YamlOutput(filename = 'mfcc.sig') # use "format = 'json'" for JSON output
output(pool)
# or as a one-liner:
YamlOutput(filename = 'mfcc.sig')(pool)
This can take a while as we actually write the MFCCs for all the frames, which can be quite heavy depending on the duration of your audio file.
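As a side note, a pool written this way can be loaded back with the YamlInput algorithm (visible in the algorithm list above); a minimal sketch, assuming the 'mfcc.sig' file we just wrote:
pool2 = YamlInput(filename = 'mfcc.sig')()   # reads the file and returns a Pool
print(pool2.descriptorNames())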
Now let's assume we do not want all the frames but only the mean and variance across them. We can compute these using the PoolAggregator algorithm on our pool of frame values, obtaining a new pool with the aggregated descriptors (check the documentation of this algorithm to get an idea of the other statistics it can compute):
# compute mean and variance of the frames
aggrPool = PoolAggregator(defaultStats = [ 'mean', 'var' ])(pool)
print('Original pool descriptor names:')
print(pool.descriptorNames())
print('')
print('Aggregated pool descriptor names:')
print(aggrPool.descriptorNames())
# and output those results to a file
YamlOutput(filename = 'mfccaggr.sig')(aggrPool)
Original pool descriptor names:
['lowlevel.mfcc', 'lowlevel.mfcc_bands']

Aggregated pool descriptor names:
['lowlevel.mfcc.mean', 'lowlevel.mfcc.var', 'lowlevel.mfcc_bands.mean', 'lowlevel.mfcc_bands.var']
This is what the file with aggregated descriptors looks like:
!cat mfccaggr.sig
metadata:
    version:
        essentia: "2.1-dev"

lowlevel:
    mfcc:
        mean: [-770.771728516, 246.557647705, 53.5677185059, 1.70909059048, -35.5930786133, -27.0709495544, -12.4148387909, -19.2304668427, -33.986038208, -23.4126434326, -15.8186225891, -5.1132478714, -2.86430335045]
        var: [9531.9296875, 2612.62597656, 1268.72875977, 442.906768799, 258.520568848, 229.063858032, 168.463638306, 126.90486145, 172.914840698, 142.858963013, 209.542709351, 237.36315918, 588.467102051]

    mfcc_bands:
        mean: [3.09789697894e-06, 0.0018204189837, 0.00687531381845, 0.00559488125145, 0.00746234040707, 0.00762519706041, 0.00263760378584, 0.00176807912067, 0.00187411252409, 0.0010101441294, 0.000384628627216, 8.97606587387e-05, 0.000103173675598, 0.000462994532427, 0.000481149676489, 0.000150407780893, 4.3479638407e-05, 9.8532436823e-06, 3.4149172734e-06, 4.67248537461e-06, 3.91657658838e-06, 1.8994775246e-06, 2.5756589821e-06, 1.52094037276e-06, 1.15387149435e-06, 3.3445369354e-06, 1.65835001553e-06, 2.04684874916e-06, 1.96311066247e-06, 1.5418397652e-06, 1.18413072414e-06, 1.06164293356e-06, 5.61618151096e-07, 5.55542726488e-07, 1.16678609174e-06, 9.67434175436e-07, 5.79169636694e-07, 4.31736594919e-07, 3.07267100652e-07, 1.74535870201e-07]
        var: [5.49797274374e-09, 2.71185967904e-06, 3.54826916009e-05, 1.4284183635e-05, 5.45716975466e-05, 5.73034012632e-05, 1.09859265649e-05, 5.82664006288e-06, 6.33308718534e-06, 2.67984637503e-06, 9.67446794675e-07, 1.09977271734e-07, 4.5999644982e-08, 1.01382534012e-06, 2.36311120716e-06, 1.19640631624e-07, 2.97720443854e-08, 1.96152050158e-09, 4.33603347672e-10, 2.12821246737e-10, 1.17798229504e-10, 3.65933568169e-11, 6.12296602309e-11, 2.86074445383e-11, 1.31610087412e-11, 1.15442384818e-10, 6.51916090555e-11, 8.07163641481e-11, 7.98162577698e-11, 4.8446673756e-11, 2.83182435834e-11, 3.88169184296e-11, 9.29356158003e-12, 1.18757833428e-11, 4.44560638302e-11, 3.39185728115e-11, 8.12059273991e-12, 6.04115542313e-12, 3.49459740659e-12, 1.1808879612e-12]
There is not much more to know to use Essentia in standard mode in Python. The basics are: instantiate and configure your algorithms, call them like functions on your data, and use a Pool to store and aggregate the results.
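To recap, here is a compact sketch stringing together the steps shown above ('audio.wav' is a placeholder filename):
import essentia
import essentia.standard as es

audio = es.MonoLoader(filename='audio.wav')()  # load and downmix to mono

w, spectrum, mfcc = es.Windowing(type='hann'), es.Spectrum(), es.MFCC()

pool = essentia.Pool()
for frame in es.FrameGenerator(audio, frameSize=1024, hopSize=512, startFromZero=True):
    _, mfcc_coeffs = mfcc(spectrum(w(frame)))
    pool.add('lowlevel.mfcc', mfcc_coeffs)

# aggregate across frames and write the result to a file
aggrPool = es.PoolAggregator(defaultStats=['mean', 'var'])(pool)
es.YamlOutput(filename='results.sig')(aggrPool)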
You can find a number of Python examples in the src/examples/tutorial folder of the source code.
In this section we will consider how to use Essentia in streaming mode.
The main difference between standard and streaming is that the standard mode is imperative while the streaming mode is declarative. That is, in standard mode you tell the computer exactly what to do, whereas in streaming mode you declare what needs to be done and let the computer do it itself. One big advantage of the streaming mode is that memory consumption is greatly reduced, as you don't need to load the entire audio into memory. Let's have a look at it.
As usual, first import the essentia module:
import essentia
from essentia.streaming import *
Instantiate our algorithms:
loader = MonoLoader(filename = '../../../test/audio/recorded/dubstep.wav')
frameCutter = FrameCutter(frameSize = 1024, hopSize = 512)
w = Windowing(type = 'hann')
spec = Spectrum()
mfcc = MFCC()
In streaming, instead of calling algorithms like functions, we need to connect their inputs and outputs. This is done using the >> operator.
For example, the graph we want to connect looks like this:
MonoLoader       FrameCutter       Windowing        Spectrum         MFCC
  audio ------->  signal
                  frame --------->  frame
                                     frame -------->  frame
                                                       spectrum ---->  spectrum
                                                                        bands ---> ???
                                                                        mfcc  ---> ???
loader.audio >> frameCutter.signal
frameCutter.frame >> w.frame >> spec.frame
spec.spectrum >> mfcc.spectrum
<essentia.streaming._StreamConnector at 0x7f11817e3940>
When building a network, all inputs need to be connected, no matter what; otherwise the network cannot be started and we get an error message:
essentia.run(loader)
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-17-5a68facf7b1d> in <module>()
----> 1 essentia.run(loader)

/usr/local/lib/python3.5/dist-packages/essentia/__init__.py in run(gen)
    146     if isinstance(gen, VectorInput) and not list(gen.connections.values())[0]:
    147         raise EssentiaError('VectorInput is not connected to anything...')
--> 148     return _essentia.run(gen)
    149
    150 log.debug(EPython, 'Successfully imported essentia python module (log fully available and synchronized with the C++ one)')

RuntimeError: MFCC::bands is not connected to any sink...
In our case, the outputs of the MFCC algorithm were not connected anywhere. Let's store the mfcc values in the pool and ignore the bands values.
MonoLoader       FrameCutter       Windowing        Spectrum         MFCC
  audio ------->  signal
                  frame --------->  frame
                                     frame -------->  frame
                                                       spectrum ---->  spectrum
                                                                        bands ---> NOWHERE
                                                                        mfcc  ---> Pool: lowlevel.mfcc
pool = essentia.Pool()
mfcc.bands >> None
mfcc.mfcc >> (pool, 'lowlevel.mfcc')
essentia.run(loader)
print('Pool contains %d frames of MFCCs' % len(pool['lowlevel.mfcc']))
Pool contains 592 frames of MFCCs
Now let's write the MFCC frames directly to a file instead of storing them in the pool. We first need to disconnect the old connection to the pool to avoid putting the same data in there again.
mfcc.mfcc.disconnect((pool, 'lowlevel.mfcc'))
We create a FileOutput and connect it. It is a special kind of connection: FileOutput has no fixed input type, because it can actually take any type of input (other algorithms will complain if you try to connect an output to an input of a different type).
fileout = FileOutput(filename = 'mfccframes.txt')
mfcc.mfcc >> fileout
<essentia.streaming._create_streaming_algo.<locals>.StreamingAlgo at 0x7f11815d5d38>
Reset the network (otherwise the loader, in particular, would not do anything useful), and rerun it:
essentia.reset(loader)
essentia.run(loader)
This is the resulting file (the first 10 lines correspond to the first 10 frames):
!head mfccframes.txt -n 10
[-430.671, 87.7917, -10.1204, -50.172, -17.9259, -36.4849, -17.5709, -5.72504, -16.6404, 8.64975, -7.41039, 5.7051, 7.18055]
[-490.824, 101.549, 68.3375, 10.5324, 9.86464, -21.2722, -12.467, -11.8749, -24.2667, -8.02748, -26.5459, -25.3716, -31.5997]
[-515.915, 90.4185, 54.5073, 25.2965, 18.2453, 1.56025, 10.0262, 21.2547, 2.83289, 7.16083, -25.8393, -22.4263, -29.8229]
[-526.075, 76.321, 33.0371, 15.6267, 16.1482, 1.94901, 26.5443, 40.805, 20.866, 20.7323, -16.962, -23.6936, -39.9292]
[-530.409, 62.8531, 17.8901, 17.2312, 19.4443, 6.44692, 35.9218, 37.0124, 9.91326, 30.9235, -10.691, -12.6595, -30.0003]
[-532.03, 66.9765, 15.174, 4.41039, 6.51187, 18.4618, 41.4819, 30.0178, 13.5438, 19.5735, -19.7553, -2.62841, -12.9201]
[-523.106, 85.9242, 15.2094, 11.4087, 9.95426, 19.4773, 20.8585, 27.0054, 19.3617, 19.016, -13.5927, -3.25358, -11.339]
[-532.996, 90.4333, 13.19, 8.79797, 20.2316, 15.791, 23.7306, 34.2449, 11.5618, 20.3763, -18.6916, -10.9794, -20.2573]
[-539.285, 74.0864, 20.9641, 18.1156, 11.1981, 6.7221, 25.9186, 38.2328, 8.60174, 16.578, -22.699, -19.8375, -27.6012]
[-512.555, 60.0025, 25.2892, 3.13255, 18.0855, -2.79686, 22.4047, 25.8552, 6.91858, 11.1513, -10.3943, -17.6128, -8.85415]