The dataset we will be working with today is the Wakeman & Henson (2015) "faces" dataset. During this experiment, participants were presented with a series of images, containing:

- faces of famous people
- faces of people unfamiliar to the participant
- scrambled faces
In this tutorial, we are going to use this dataset to explore the neural representational code within the visual cortex. From time to time, there will be green blocks indicating it's up to you to do something, like this one:
In the cell below, update the data_path
variable to point to where you have extracted the rsa-data.zip
file to.
(If you are running this on MyBinder then the data is located in the data
folder).
%matplotlib inline
import matplotlib as mpl
mpl.rcParams['figure.dpi'] = 90 # Tune this to make figures bigger/smaller
# Set this to where you've extracted `data.zip` to
data_path = "data"
Let's start by taking a look at the stimuli that were presented during the experiment.
I've put them in the stimuli
folder for you as .bmp
image files.
The Python Imaging Library (PIL) can open them and display them in this notebook.
We can use the notebook's native IPython.display.display
function if we want to display more than one image at once.
from PIL import Image
from IPython.display import display
# Show the first "famous" face and the first "scrambled" face
img_famous = Image.open(f"{data_path}/stimuli/f001.bmp")
img_scrambled = Image.open(f"{data_path}/stimuli/s001.bmp")
print(f"Famous face: {img_famous.width} x {img_famous.height} pixels")
display(img_famous)
print(f"Scrambled face: {img_scrambled.width} x {img_scrambled.height} pixels")
display(img_scrambled)
Famous face: 128 x 162 pixels
Scrambled face: 128 x 162 pixels
Loaded like this, the stimuli are in a representational space defined by their pixels. Each image is represented by 128 x 162 = 20736 values between 0 (black) and 255 (white). Let's create a Representational Dissimilarity Matrix (RDM) where images are compared based on the difference between their pixels. To get the pixels of an image, you can convert it to a NumPy array like this:
import numpy as np
pixels_famous = np.array(img_famous)
pixels_scrambled = np.array(img_scrambled)
print("Shape of the pixel array for the famous face:", pixels_famous.shape)
print("Shape of the pixel array for the scrambled face:", pixels_scrambled.shape)
Shape of the pixel array for the famous face: (162, 128)
Shape of the pixel array for the scrambled face: (162, 128)
We can now compute the "dissimilarity" between the two images, based on their pixels. For this, we need to decide on a metric to use. The default metric used in the original publication (Kriegeskorte et al. 2008) was Pearson correlation, so let's use that. Of course, correlation is a metric of similarity and we want a metric of dissimilarity. Let's make it easy on ourselves and just do $1 - r$.
from scipy.stats import pearsonr
similarity, _ = pearsonr(pixels_famous.flatten(), pixels_scrambled.flatten())
dissimilarity = 1 - similarity
print(f"The dissimilarity between the pixels of the famous and scrambled faces is: {dissimilarity:.3f}")
The dissimilarity between the pixels of the famous and scrambled faces is: 0.418
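As a quick cross-check, SciPy's built-in "correlation" distance is defined as exactly this $1 - r$, so it should reproduce the number above. A minimal sketch (purely illustrative, not part of the tutorial code):
from scipy.spatial.distance import pdist
# SciPy's "correlation" metric computes 1 - Pearson r between the rows of the input
pair = np.vstack([pixels_famous.flatten(), pixels_scrambled.flatten()]).astype(float)
print(pdist(pair, metric="correlation"))  # should print a value close to 0.418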
To construct the full RDM, we need to do this for all pairs of images. I'll talk you through the process, but I will let you do the coding for this. Ready? Let's go!
In the cell below, I've already constructed a list of all image files for you.
Your first task is to load all of them (there are 450), convert them to NumPy arrays, and concatenate them together into a single big array called pixels
of shape n_images x height x width
.
from glob import glob
files = sorted(glob(f"{data_path}/stimuli/*.bmp"))
print(f"There are {len(files)} images to read.")
pixels = np.array([np.array(Image.open(f)) for f in files])  # write your code here
There are 450 images to read.
If you did it correctly, then executing the cell below should tell us the shape of your big array, and verify its dimensions.
print("The dimensions of the `pixel` array are:", pixels.shape)
if pixels.shape == (450, 162, 128):
print("These dimensions are correct! 😊")
else:
print("These dimensions are not correct. 🤔")
The dimensions of the `pixel` array are: (450, 162, 128)
These dimensions are correct! 😊
Now that you have all the images loaded in, computing the pairwise dissimilarities is a matter of looping over them and computing correlations.
We could do this manually, but we can make our life a lot easier by using MNE-RSA's compute_rdm
function.
It wants the big matrix as input and also takes a metric
parameter to select which dissimilarity metric to use.
Setting it to metric="correlation"
, which is also the default by the way, will make it use (1 - Pearson correlation) as a metric like we did manually above.
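Just to illustrate the call on a toy array before you try it yourself (a hypothetical example; the Euclidean metric here is only to show the metric parameter, it is not what we'll use):
from mne_rsa import compute_rdm
# Toy illustration: 5 "items" with 10 features each, compared with Euclidean distance
toy = np.random.rand(5, 10)
toy_rdm = compute_rdm(toy, metric="euclidean")
print(toy_rdm.shape)  # condensed form: 5 * 4 / 2 = 10 pairwise distances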
In the cell below, I've imported the function for you. I'll leave it up to you to call it properly (check its documentation if you're unsure).
from mne_rsa import compute_rdm
pixel_rdm = compute_rdm(pixels)  # write the call to compute_rdm() here
If you did it correctly, executing the cell below will plot your RDM:
from mne_rsa import plot_rdms
plot_rdms(pixel_rdm);
Staring deeply into this RDM will reveal to you which images belonged to the "scrambled faces" class, as those pixels are quite different from the actual faces and from each other. We also see that, for some reason, the famous faces are a little more alike than the unfamiliar faces.
The diagonal is all zeros. Take a moment to ponder why that would be.
The compute_rdm
function is a wrapper around scipy.spatial.distance.pdist
.
This means that all the metrics supported by pdist
are also valid for compute_rdm
.
This also means that in MNE-RSA, the native format for an RDM is the so-called "condensed" form.
Since RDMs are symmetric, only the upper triangle is stored.
The scipy.spatial.distance.squareform
function can be used to go from a square matrix to its condensed form and back.
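For example, a round trip for the RDM we just computed might look like this (a small sketch, assuming pixel_rdm is in condensed form as described):
from scipy.spatial.distance import squareform
rdm_square = squareform(pixel_rdm)      # condensed vector -> (450, 450) square matrix
rdm_condensed = squareform(rdm_square)  # square matrix -> condensed vector again
print(rdm_square.shape, rdm_condensed.shape)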
There are many sensible representations possible for images. One intriguing one is to create them using convolutional neural networks (CNNs). For example, there is the FaceNet model by Schroff et al. (2015) that can generate high-level representations, such that different photos of the same face have similar representations. I have run the stimulus images through FaceNet and recorded the generated embeddings for you to use:
store = np.load(f"{data_path}/stimuli/facenet_embeddings.npz")
filenames = store["filenames"]
embeddings = store["embeddings"]
print(f"For each of the 450 images, the embedding is a vector of length 512: {embeddings.shape}")
For each of the 450 images, the embedding is a vector of length 512: (450, 512)
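Since we will later compare this RDM against the pixel RDM, it is worth checking that the embeddings are stored in the same order as the files list we used earlier. A minimal sketch (assuming filenames contains bare file names such as "f001.bmp"):
import os
# Compare the stimulus order of the embeddings with the order used for the pixel RDM
same_order = [os.path.basename(f) for f in files] == list(filenames)
print("Embeddings are in the same order as the image files:", same_order)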
I leave it up to you to construct the RDM based on the FaceNet embedding vectors using the compute_rdm
function.
Use Pearson correlation as the dissimilarity metric and store the RDM in a variable called facenet_rdm
.
Make sure that the stimuli are in the same order as the pixel RDM we created earlier!
facenet_rdm = compute_rdm(embeddings) # write your code here
If you created the FaceNet RDM correctly, executing the cell below should plot both RDMs side-by-side:
plot_rdms([pixel_rdm, facenet_rdm], names=["pixels", "facenet"]);
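Out of curiosity, we can also quantify how similar the two model RDMs are to each other, for example with a Spearman correlation between the condensed RDM vectors (a quick sketch; later we will use MNE-RSA's own rsa function for this kind of comparison):
from scipy.stats import spearmanr
# Rank-correlate the two condensed model RDMs
rho, _ = spearmanr(pixel_rdm, facenet_rdm)
print(f"Spearman correlation between the pixel and FaceNet RDMs: {rho:.3f}")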
We've seen how we can create RDMs using properties of the images or embeddings generated by a model. Now it's time to see how we create RDMs based on the MEG data. For that, we first load the epochs from a single participant.
import mne
epochs = mne.read_epochs(f"{data_path}/sub-02/sub-02-epo.fif")
epochs
Reading /home/vanvlm1/projects/neuroscience_tutorials/rsa/data/sub-02/sub-02-epo.fif ...
Found the data of interest: t = -200.00 ... 2900.00 ms
0 CTF compensation matrices available
Adding metadata with 2 columns
879 matching events found
No baseline correction applied
0 projection items activated
Field | Value
---|---
Filename(s) | sub-02-epo.fif
MNE object type | EpochsFIF
Measurement date | 2009-04-09 at 11:04:14 UTC
Experimenter | MEG
Total number of events | 879
Events counts | face/famous/first: 147; face/famous/immediate: 78; face/famous/long: 66; face/unfamiliar/first: 149; face/unfamiliar/immediate: 65; face/unfamiliar/long: 79; scrambled/first: 150; scrambled/immediate: 71; scrambled/long: 74
Time range | -0.200 – 2.900 s
Baseline | -0.200 – 0.000 s
Sampling frequency | 220.00 Hz
Time points | 683
Metadata | 879 rows × 2 columns
Channel types | Magnetometers, Gradiometers, EOG, ECG, Stimulus
Head & sensor digitization | 137 points
Highpass filter | 1.00 Hz
Lowpass filter | 40.00 Hz
Each epoch corresponds to the presentation of an image, and the signal across the sensors over time can be used as the neural representation of that image. Hence, one could make a neural RDM, for example of the gradiometers, like this:
neural_rdm = compute_rdm(epochs.copy().pick("grad").crop(0.1, 0.2).get_data())
plot_rdms(neural_rdm);
To compute RSA scores, we want to compare the resulting neural RDM with the RDMs we've created earlier. However, if we inspect the neural RDM closely, we see that its rows and columns don't line up with those of the previous RDMs. There are too many (879 vs. 450) and they are in the wrong order. Making sure that the RDMs match is an important and sometimes tricky part of RSA.
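A quick way to see the mismatch is to compare the shapes of the square forms (a sketch, assuming both RDMs are in condensed form):
from scipy.spatial.distance import squareform
print(squareform(neural_rdm).shape)  # one row/column per epoch: expected (879, 879)
print(squareform(pixel_rdm).shape)   # one row/column per image: expected (450, 450)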
To help us out, a useful feature of MNE-Python is that epochs have an associated epochs.metadata
field.
This metadata is a Pandas DataFrame where each row contains information about the corresponding epoch.
The epochs in this tutorial come with some useful .metadata
already:
epochs.metadata
(index) | trigger | file |
---|---|---|
0 | 13 | u032.bmp |
1 | 14 | u032.bmp |
2 | 13 | u088.bmp |
3 | 13 | u084.bmp |
4 | 5 | f123.bmp |
... | ... | ... |
882 | 5 | f016.bmp |
883 | 6 | f016.bmp |
884 | 5 | f002.bmp |
885 | 6 | f002.bmp |
886 | 7 | f150.bmp |
879 rows × 2 columns
While the trigger codes only indicate what type of stimulus was shown, the file
column of the metadata tells us the exact image.
A couple of challenges here: the stimuli were shown in a random order, stimuli were repeated twice during the experiment, and some epochs were dropped during preprocessing, so not every image is necessarily present twice in the epochs
data. 😩
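We can verify that last point directly from the metadata. A small sketch counting how often each image occurs among the 879 epochs:
# With 879 epochs and 450 images, not every image can be present exactly twice
counts = epochs.metadata["file"].value_counts()
print(counts.value_counts())  # how many images occur twice vs. only once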
Luckily, MNE-RSA has a way to make our lives easier.
Let's take a look at the rdm_epochs
function, the Swiss army knife for computing RDMs from an MNE-Python epochs
object:
from mne_rsa import rdm_epochs
rdm_epochs?
Signature:
rdm_epochs(epochs, noise_cov=None, spatial_radius=None, temporal_radius=None,
           dist_metric='correlation', dist_params={}, y=None, n_folds=1,
           picks=None, tmin=None, tmax=None, dropped_as_nan=False)

Docstring:
Generate RDMs in a searchlight pattern on epochs.

Parameters
----------
epochs : instance of mne.Epochs
    The brain activity during the epochs. The event codes are used to distinguish between items.
noise_cov : mne.Covariance | None
    When specified, the data will by normalized using the noise covariance. This is recommended in all cases, but a hard requirement when the data contains sensors of different types. Defaults to None.
spatial_radius : floats | None
    The spatial radius of the searchlight patch in meters. All sensors within this radius will belong to the searchlight patch. Set to None to only perform the searchlight over time, flattening across sensors. Defaults to None.
temporal_radius : float | None
    The temporal radius of the searchlight patch in seconds. Set to None to only perform the searchlight over sensors, flattening across time. Defaults to None.
dist_metric : str
    The metric to use to compute the RDM for the epochs. This can be any metric supported by the scipy.distance.pdist function. See also the ``epochs_rdm_params`` parameter to specify and additional parameter for the distance function. Defaults to 'correlation'.
dist_params : dict
    Extra arguments for the distance metric used to compute the RDMs. Refer to :mod:`scipy.spatial.distance` for a list of all other metrics and their arguments. Defaults to an empty dictionary.
y : ndarray of int, shape (n_items,) | None
    For each Epoch, a number indicating the item to which it belongs. When ``None``, the event codes are used to differentiate between items. Defaults to ``None``.
n_folds : int | sklearn.model_selection.BaseCrollValidator | None
    Number of cross-validation folds to use when computing the distance metric. Folds are created based on the ``y`` parameter. Specify ``None`` to use the maximum number of folds possible, given the data. Alternatively, you can pass a Scikit-Learn cross validator object (e.g. ``sklearn.model_selection.KFold``) to assert fine-grained control over how folds are created. Defaults to 1 (no cross-validation).
picks : str | list | slice | None
    Channels to include. Slices and lists of integers will be interpreted as channel indices. In lists, channel *type* strings (e.g., ``['meg', 'eeg']``) will pick channels of those types, channel *name* strings (e.g., ``['MEG0111', 'MEG2623']`` will pick the given channels. Can also be the string values "all" to pick all channels, or "data" to pick data channels. ``None`` (default) will pick all MEG and EEG channels, excluding those maked as "bad".
tmin : float | None
    When set, searchlight patches will only be generated from subsequent time points starting from this time point. This value is given in seconds. Defaults to ``None``, in which case patches are generated starting from the first time point.
tmax : float | None
    When set, searchlight patches will only be generated up to and including this time point. This value is given in seconds. Defaults to ``None``, in which case patches are generated up to and including the last time point.
dropped_as_nan : bool
    When this is set to ``True``, the drop log will be used to inject NaN values in the RDMs at the locations where a bad epoch was dropped. This is useful to ensure the dimensions of the RDM are the same, irregardless of any bad epochs that were dropped. Make sure to use ``ignore_nan=True`` when using RDMs with NaNs in them during subsequent RSA computations. Defaults to ``False``.

    .. versionadded:: 0.8

Yields
------
rdm : ndarray, shape (n_items, n_items)
    A RDM for each searchlight patch.

File:      ~/micromamba/lib/python3.11/site-packages/mne_rsa/sensor_level.py
Type:      function
In MNE-Python tradition, the function has a lot of parameters, but all but one have a default, so you only have to specify the ones that are relevant to you. For example, to redo the neural RDM we created above, we could do something like:
neural_rdm_gen = rdm_epochs(epochs, tmin=0.1, tmax=0.2)
# rdm_epochs returns a generator of RDMs
# unpacking the first (and only) RDM from the generator
neural_rdm = next(neural_rdm_gen)
plot_rdms(neural_rdm);
Take note that rdm_epochs
returns a generator of RDMs.
This is because one of the main use-cases for MNE-RSA is to produce RDMs using sliding windows (in time and also in space), which can produce a large number of RDMs that can take up a lot of memory if you're not careful.
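If memory is a concern, you can consume the generator lazily instead of collecting every RDM into a list first. A minimal sketch (here there is only a single RDM, since we are not using a searchlight yet):
# Process each RDM as it is produced, rather than keeping them all in memory
for rdm in rdm_epochs(epochs, tmin=0.1, tmax=0.2):
    print(rdm.shape, rdm.mean())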
Looking at the neural RDM above, something is clearly different from the one we made before: this one has only 9 rows and columns. Closely inspecting the docstring of rdm_epochs
reveals that it is the y
parameter that is responsible for this:
y : ndarray of int, shape (n_items,) | None
For each Epoch, a number indicating the item to which it belongs. When
None, the event codes are used to differentiate between items.
Defaults to None.
Instead of producing one row per epoch, rdm_epochs
produced one row per event type, averaging across epochs of the same type before computing dissimilarity.
This is not quite what we want though.
If we want to match pixel_rdm
and facenet_rdm
, we want every single one of the 450 images to be its own stimulus type.
Turning it over to you: in the cell below, write the code necessary to construct the desired neural RDM. This is your first real challenge in this workshop. Keep the following in mind:
- Set the `y` parameter of `rdm_epochs` to a list that assigns each of the 879 epochs a number from 1-450 (or 0-449) indicating which image was shown. Take care to assign numbers according to the order in which the images appear in `pixel_rdm` and `facenet_rdm`.
- You have the `files` and `filenames` variables left over from earlier that contain all the images in the proper order.
- The `epochs.metadata["file"]` column contains the filenames corresponding to the epochs.
- Restrict `rdm_epochs` to only consider data from 0.1 to 0.2 seconds.
- `rdm_epochs` returns a generator, so use `next()` to unpack the RDM from it.

epochs.metadata
(index) | trigger | file |
---|---|---|
0 | 13 | u032.bmp |
1 | 14 | u032.bmp |
2 | 13 | u088.bmp |
3 | 13 | u084.bmp |
4 | 5 | f123.bmp |
... | ... | ... |
882 | 5 | f016.bmp |
883 | 6 | f016.bmp |
884 | 5 | f002.bmp |
885 | 6 | f002.bmp |
886 | 7 | f150.bmp |
879 rows × 2 columns
y = [list(filenames).index(f) for f in epochs.metadata.file]  # compute y here (index of each epoch's image in filenames)
neural_rdm = next(rdm_epochs(epochs, y=y, tmin=0.1, tmax=0.2)) # compute the RDM here
# This plots your RDM
plot_rdms(neural_rdm);
If you've done it correctly, the cell below will compute RSA between the neural RDM and the pixel and FaceNet RDMs we created earlier. The RSA score will be the Spearman correlation between the RDMs, which is the default metric used in the original RSA paper.
from mne_rsa import rsa
rsa_pixel = rsa(neural_rdm, pixel_rdm, metric="spearman")
rsa_facenet = rsa(neural_rdm, facenet_rdm, metric="spearman")
print("RSA score between neural RDM and pixel RDM:", rsa_pixel)
print("RSA score between neural RDM and FaceNet RDM:", rsa_facenet)
RSA score between neural RDM and pixel RDM: 0.07869920694906636
RSA score between neural RDM and FaceNet RDM: 0.07529582461337744
The neural representation of a stimulus is different across brain regions and evolves over time. For example, we would expect that the pixel RDM would be more similar to a neural RDM that we computed across the visual cortex at an early time point, and that the FaceNet RDM might be more similar to a neural RDM that we computed at a later time point.
For the remainder of this notebook, we'll restrict the epochs
to only contain the sensors over the left occipital cortex.
picks = mne.channels.read_vectorview_selection("Left-occipital")
picks = ["".join(p.split(" ")) for p in picks]
epochs.pick(picks).pick("grad").crop(-0.1, 1)
Field | Value
---|---
Filename(s) | sub-02-epo.fif
MNE object type | EpochsFIF
Measurement date | 2009-04-09 at 11:04:14 UTC
Experimenter | MEG
Total number of events | 879
Events counts | face/famous/first: 147; face/famous/immediate: 78; face/famous/long: 66; face/unfamiliar/first: 149; face/unfamiliar/immediate: 65; face/unfamiliar/long: 79; scrambled/first: 150; scrambled/immediate: 71; scrambled/long: 74
Time range | -0.100 – 1.000 s
Baseline | -0.200 – 0.000 s
Sampling frequency | 220.00 Hz
Time points | 243
Metadata | 879 rows × 2 columns
Channel types | Gradiometers
Head & sensor digitization | 137 points
Highpass filter | 1.00 Hz
Lowpass filter | 40.00 Hz
In the cell below, use rdm_epochs
to compute RDMs using a sliding window by setting the temporal_radius
parameter to 0.1
seconds.
Use the entire time range (tmin=None
and tmax=None
) and leave the result as a generator (so no next()
calls).
Store the resulting generator in a variable called neural_rdms_gen
.
neural_rdms_gen = rdm_epochs(epochs, y=y, temporal_radius=0.1) # write your call to rdm_epochs() here
If you did it correctly, the cell below will consume the generator (with a nice progress bar) and plot a few of the generated RDMs:
from tqdm import tqdm
times = epochs.times[(epochs.times >= 0) & (epochs.times <= 0.9)]
neural_rdms_list = list(tqdm(neural_rdms_gen, total=len(times)))
plot_rdms(neural_rdms_list[::10], names=[f"t={t:.2f}" for t in times[::10]]);
0%| | 0/199 [00:00<?, ?it/s]
Creating temporal searchlight patches
100%|███████████████████████████████████████████████████████████████████████████████| 199/199 [00:05<00:00, 36.63it/s]
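You now have a list of time-resolved neural RDMs. Scoring them by hand against a model RDM would look something like this minimal sketch (using the rsa function from earlier, against the pixel RDM only):
# One Spearman RSA score per searchlight time point
rsa_pixel_over_time = [rsa(rdm, pixel_rdm, metric="spearman") for rdm in neural_rdms_list]
print(len(rsa_pixel_over_time), "scores, one per time point")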
Now all that is left to do is compute RSA scores between the neural RDMs you've just created and the pixel and FaceNet RDMs.
We could do this using the rsa_gen
function, but I'd rather directly show you the rsa_epochs
function that combines computing the neural RDMs with computing the RSA scores:
from mne_rsa import rsa_epochs
rsa_epochs?
Signature:
rsa_epochs(epochs, rdm_model, noise_cov=None, spatial_radius=None, temporal_radius=None,
           epochs_rdm_metric='correlation', epochs_rdm_params={}, rsa_metric='spearman',
           ignore_nan=False, y=None, n_folds=1, picks=None, tmin=None, tmax=None,
           dropped_as_nan=False, n_jobs=1, verbose=False)

Docstring:
Perform RSA in a searchlight pattern on epochs.

The output is an Evoked object where the "signal" at each sensor is the RSA, computed using all surrounding sensors.

Parameters
----------
epochs : instance of mne.Epochs
    The brain activity during the epochs. The event codes are used to distinguish between items.
rdm_model : ndarray, shape (n, n) | (n * (n - 1) // 2,) | list of ndarray
    The model RDM, see :func:`compute_rdm`. For efficiency, you can give it in condensed form, meaning only the upper triangle of the matrix as a vector. See :func:`scipy.spatial.distance.squareform`. To perform RSA against multiple models at the same time, supply a list of model RDMs. Use :func:`compute_rdm` to compute RDMs.
noise_cov : mne.Covariance | None
    When specified, the data will by normalized using the noise covariance. This is recommended in all cases, but a hard requirement when the data contains sensors of different types. Defaults to None.
spatial_radius : floats | None
    The spatial radius of the searchlight patch in meters. All sensors within this radius will belong to the searchlight patch. Set to None to only perform the searchlight over time, flattening across sensors. Defaults to None.
temporal_radius : float | None
    The temporal radius of the searchlight patch in seconds. Set to None to only perform the searchlight over sensors, flattening across time. Defaults to None.
epochs_rdm_metric : str
    The metric to use to compute the RDM for the epochs. This can be any metric supported by the scipy.distance.pdist function. See also the ``epochs_rdm_params`` parameter to specify and additional parameter for the distance function. Defaults to 'correlation'.
epochs_rdm_params : dict
    Extra arguments for the distance metric used to compute the RDMs. Refer to :mod:`scipy.spatial.distance` for a list of all other metrics and their arguments. Defaults to an empty dictionary.
rsa_metric : str
    The RSA metric to use to compare the RDMs. Valid options are:

    * 'spearman' for Spearman's correlation (the default)
    * 'pearson' for Pearson's correlation
    * 'kendall-tau-a' for Kendall's Tau (alpha variant)
    * 'partial' for partial Pearson correlations
    * 'partial-spearman' for partial Spearman correlations
    * 'regression' for linear regression weights

    Defaults to 'spearman'.
ignore_nan : bool
    Whether to treat NaN's as missing values and ignore them when computing the distance metric. Defaults to ``False``.

    .. versionadded:: 0.8
y : ndarray of int, shape (n_items,) | None
    For each Epoch, a number indicating the item to which it belongs. When ``None``, the event codes are used to differentiate between items. Defaults to ``None``.
n_folds : int | sklearn.model_selection.BaseCrollValidator | None
    Number of cross-validation folds to use when computing the distance metric. Folds are created based on the ``y`` parameter. Specify ``None`` to use the maximum number of folds possible, given the data. Alternatively, you can pass a Scikit-Learn cross validator object (e.g. ``sklearn.model_selection.KFold``) to assert fine-grained control over how folds are created. Defaults to 1 (no cross-validation).
picks : str | list | slice | None
    Channels to include. Slices and lists of integers will be interpreted as channel indices. In lists, channel *type* strings (e.g., ``['meg', 'eeg']``) will pick channels of those types, channel *name* strings (e.g., ``['MEG0111', 'MEG2623']`` will pick the given channels. Can also be the string values "all" to pick all channels, or "data" to pick data channels. ``None`` (default) will pick all MEG and EEG channels, excluding those maked as "bad".
tmin : float | None
    When set, searchlight patches will only be generated from subsequent time points starting from this time point. This value is given in seconds. Defaults to ``None``, in which case patches are generated starting from the first time point.
tmax : float | None
    When set, searchlight patches will only be generated up to and including this time point. This value is given in seconds. Defaults to ``None``, in which case patches are generated up to and including the last time point.
dropped_as_nan : bool
    When this is set to ``True``, the drop log will be used to inject NaN values in the RDMs at the locations where a bad epoch was dropped. This is useful to ensure the dimensions of the RDM are the same, irregardless of any bad epochs that were dropped. Make sure to use ``ignore_nan=True`` when using RDMs with NaNs in them during subsequent RSA computations. Defaults to ``False``.

    .. versionadded:: 0.8
n_jobs : int
    The number of processes (=number of CPU cores) to use. Specify -1 to use all available cores. Defaults to 1.
verbose : bool
    Whether to display a progress bar. In order for this to work, you need the tqdm python module installed. Defaults to False.

Returns
-------
rsa : Evoked | list of Evoked
    The correlation values for each searchlight patch. When spatial_radius is set to None, there will only be one virtual sensor. When temporal_radius is set to None, there will only be one time point. When multiple models have been supplied, a list will be returned containing the RSA results for each model.

See Also
--------
compute_rdm

File:      ~/micromamba/lib/python3.11/site-packages/mne_rsa/sensor_level.py
Type:      function
The signature of rsa_epochs
is very similar to that of rdm_epochs
.
The main difference is that we also give it the "model" RDMs, in our case the pixel and FaceNet RDMs.
rsa_epochs
will return the RSA scores as a list of mne.Evoked
objects: one for each model RDM we gave it.
Go ahead and:

- compute RSA between the `epochs` and the model RDMs `[pixel_rdm, facenet_rdm]`
- set `verbose=True` to activate a progress bar
- set `n_jobs=-1` to use multiple CPU cores to speed things up
- store the result in a variable called `ev_rsa`
ev_rsa = rsa_epochs(epochs, [pixel_rdm, facenet_rdm], y=y, temporal_radius=0.1)
Performing RSA between Epochs and 2 model RDM(s)
Temporal radius: 22 samples
Time interval: None-None seconds
Creating temporal searchlight patches
If you did it correctly, executing the cell below will create a nice plot of the result.
ev_rsa[0].comment = "pixels"
ev_rsa[1].comment = "facenet"
mne.viz.plot_compare_evokeds(ev_rsa, picks=[0], ylim=dict(misc=[-0.02, 0.2]), show_sensors=False);
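As a small follow-up, you could read off when each model's RSA trace peaks, working directly on the Evoked data arrays (a sketch; with spatial_radius=None there is only one virtual sensor):
# Find the time point with the highest RSA score for each model RDM
for ev in ev_rsa:
    peak_time = ev.times[ev.data[0].argmax()]
    print(f"{ev.comment}: peak RSA of {ev.data[0].max():.3f} at {peak_time:.3f} s")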
If you've made it this far, you have successfully completed your first sensor-level RSA! 🎉 This is the end of this notebook. I invite you to join me in the next notebook, where we will do source-level RSA.