This day introduces some of the applications of deep learning in neuroscience. In the intro, Aude Oliva covers the basics of convolutional neural networks trained to do image recognition and how to compare these artificial neural networks to neural activity in the brain.

In the three tutorials, we apply deep learning in three of the key ways it is used in neuroscience: decoding models, encoding models, and representational similarity analysis. All three tutorials use the same neural activity, recorded from the visual cortex of awake mice while the mice were presented with oriented grating stimuli. In Tutorial 1, we start with simple neural networks consisting of fully connected linear layers. We introduce non-linear activation functions and show how to optimize these deep networks with back-propagation using PyTorch. We optimize the network to decode the presented visual stimulus from the recorded neural activity in the visual cortex. Tutorial 2 then introduces convolutional layers, the building blocks of networks for visual tasks; as a bonus, that tutorial fits an encoding model from visual stimuli to neural activity. Finally, in Tutorial 3, we optimize a convolutional neural network to perform an orientation discrimination task and compare the internal representations of the artificial network to neural activity using representational similarity analysis.

In the outro, the caveats of modeling the brain as a deep convolutional neural network are introduced and explored, including approaches to make deep networks more biologically plausible. In the second, optional outro, deep learning is used to estimate the pose of infants and to inform clinical judgments.
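As a concrete preview of Tutorial 1, here is a minimal sketch of such a decoder in PyTorch. The shapes (100 neurons, 8 orientations) and the random data are illustrative stand-ins, not the tutorial's actual recordings:

import torch
import torch.nn as nn

# Hypothetical shapes: 100 recorded neurons, 8 grating orientations.
n_neurons, n_orientations = 100, 8

# A small fully connected decoder: neural activity -> stimulus class.
decoder = nn.Sequential(
    nn.Linear(n_neurons, 32),       # fully connected linear layer
    nn.ReLU(),                      # non-linear activation function
    nn.Linear(32, n_orientations),  # one output score per orientation
)

responses = torch.randn(256, n_neurons)            # fake neural activity
labels = torch.randint(0, n_orientations, (256,))  # fake stimulus labels

optimizer = torch.optim.SGD(decoder.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(decoder(responses), labels)  # decoding loss
    loss.backward()                             # back-propagation
    optimizer.step()                            # gradient descent step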
There is a growing need for data analysis tools as neuroscientists gain the ability to record larger neural populations during more complex behaviors. Deep neural networks can approximate a wide range of non-linear functions and can be fit easily, making them flexible model architectures for building decoding and encoding models of large-scale data. Generalized linear models (GLMs) were used as decoding and encoding models in W1D3. A model that decodes a variable from neural activity can tell us how much information a brain area contains about that variable. An encoding model maps an input variable, such as a visual stimulus, to neural activity. The encoding model is meant to approximate the same transformation that the brain performs on input variables and can therefore help us understand how the brain represents information.
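To make the distinction concrete, here is a minimal sketch of an encoding model in PyTorch, running in the opposite direction of the decoder above: from a stimulus orientation to predicted population activity. The shapes and data are again illustrative stand-ins:

import math
import torch
import torch.nn as nn

# Hypothetical encoding model: grating orientation -> predicted
# activity of each of 100 recorded neurons.
n_neurons = 100

encoder = nn.Sequential(
    nn.Linear(1, 16),          # input: stimulus orientation in radians
    nn.ReLU(),
    nn.Linear(16, n_neurons),  # output: predicted response per neuron
)

orientations = torch.rand(256, 1) * math.pi  # fake stimuli
responses = torch.randn(256, n_neurons)      # fake recorded activity

optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(encoder(orientations), responses)  # prediction error
    loss.backward()
    optimizer.step()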
The final application of deep neural networks, in Tutorial 3, is currently the most common one in neuroscience: comparing the activity of artificial neural networks to brain activity. Since deep convolutional neural networks are the only class of models that can perform at human accuracy on visual tasks like object recognition, they often make sense as starting points for comparison with neural data. This comparison can be done at a variety of scales, such as at the population level, as in the tutorial, or at the level of single neurons matched to single units in the deep network. This type of research can help answer questions such as which datasets and tasks produce neural networks that best approximate the brain (e.g., neural taskonomy: in simple terms, inferring the similarity of task-derived representations from brain activity), and what that implies about the architecture and learning rules of the brain. Deep networks are also trained on other complex tasks that involve learning how to explore environments and identify rewarding stimuli; stay tuned for more of this in W3D4 Reinforcement Learning.
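Here is a minimal sketch of the core representational similarity computation, using random stand-in activity for both systems and the common choice of one minus the Pearson correlation as the dissimilarity measure:

import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Random stand-in activity: responses to 8 stimulus conditions from a
# deep network layer (64 units) and from recorded neurons (100 cells).
n_conditions = 8
net_acts = rng.standard_normal((n_conditions, 64))
brain_acts = rng.standard_normal((n_conditions, 100))

def rdm(acts):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between the response patterns of every pair of conditions.
    return 1 - np.corrcoef(acts)

# Compare the two RDMs by rank-correlating their upper triangles.
iu = np.triu_indices(n_conditions, k=1)
similarity, _ = spearmanr(rdm(net_acts)[iu], rdm(brain_acts)[iu])
print(f"Representational similarity (Spearman r): {similarity:.3f}")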
# @title Install and import feedback gadget
!pip3 install vibecheck datatops --quiet
from vibecheck import DatatopsContentReviewContainer
def content_review(notebook_section: str):
    return DatatopsContentReviewContainer(
        "",  # No text prompt
        notebook_section,
        {
            "url": "https://pmyvdlilci.execute-api.us-east-1.amazonaws.com/klab",
            "name": "neuromatch_cn",
            "user_key": "y1x3mpx5",
        },
    ).render()
feedback_prefix = "W1D5_Intro"
# @markdown
from ipywidgets import widgets
from IPython.display import YouTubeVideo
from IPython.display import IFrame
from IPython.display import display
class PlayVideo(IFrame):
    def __init__(self, id, source, page=1, width=400, height=300, **kwargs):
        self.id = id
        if source == 'Bilibili':
            src = f'https://player.bilibili.com/player.html?bvid={id}&page={page}'
        elif source == 'Osf':
            src = f'https://mfr.ca-1.osf.io/render?url=https://osf.io/download/{id}/?direct%26mode=render'
        super(PlayVideo, self).__init__(src, width, height, **kwargs)


def display_videos(video_ids, W=400, H=300, fs=1):
    tab_contents = []
    for i, video_id in enumerate(video_ids):
        out = widgets.Output()
        with out:
            if video_ids[i][0] == 'Youtube':
                video = YouTubeVideo(id=video_ids[i][1], width=W,
                                     height=H, fs=fs, rel=0)
                print(f'Video available at https://youtube.com/watch?v={video.id}')
            else:
                video = PlayVideo(id=video_ids[i][1], source=video_ids[i][0], width=W,
                                  height=H, fs=fs, autoplay=False)
                if video_ids[i][0] == 'Bilibili':
                    print(f'Video available at https://www.bilibili.com/video/{video.id}')
                elif video_ids[i][0] == 'Osf':
                    print(f'Video available at https://osf.io/{video.id}')
            display(video)
        tab_contents.append(out)
    return tab_contents
video_ids = [('Youtube', 'IZvcy0Myb3M'), ('Bilibili', 'BV1sk4y1B7Ej')]
tab_contents = display_videos(video_ids, W=854, H=480)
tabs = widgets.Tab()
tabs.children = tab_contents
for i in range(len(tab_contents)):
    tabs.set_title(i, video_ids[i][0])
display(tabs)
# @markdown
from IPython.display import IFrame
link_id = "jw829"
print(f"If you want to download the slides: https://osf.io/download/{link_id}/")
IFrame(src=f"https://mfr.ca-1.osf.io/render?url=https://osf.io/{link_id}/?direct%26mode=render%26action=download%26mode=render", width=854, height=480)
# @title Submit your feedback
content_review(f"{feedback_prefix}_Intro")