In this Notebook, we will demonstrate how to preprocess brain MR images with the BrainLes preprocessing package.
Many downstream tasks require some form of data preprocessing:
Our BrainLes preprocessing package allows you to perform preprocessing in a modular and backend-agnostic way, meaning different registration, brain extraction, and defacing tools can be used.
This tutorial requires:
optional (but recommended):
If you installed the packages and requirements on your own machine, you can skip this section and start from the import section. Otherwise, you can follow and execute the tutorial in your browser. To start working on the notebook, click the following button; this will open this page in the Colab environment and you will be able to execute the code on your own.
Now that you are viewing the notebook in Colab, run the next cell to install the packages we will use. There are a few steps to follow to set the notebook up properly:
By running the next cell you are going to create a folder in your Google Drive. All the files for this tutorial will be uploaded to this folder. After the first execution you might receive some warnings and notifications; please follow these instructions:
Google Drive for desktop wants to access your Google Account. Click on 'Allow'.
# Create a folder in your Google Drive
# from google.colab import drive
# drive.mount('/content/drive')
# Don't run this cell if you already cloned the repo
# !git clone https://github.com/BrainLesion/tutorials.git
# make files from the repo available in colab
import sys
from pathlib import Path
COLAB_BASE_PATH = Path("/content/tutorials/preprocessing/")
sys.path.insert(0, str(COLAB_BASE_PATH))  # sys.path entries should be str, not Path
%pip install brainles_preprocessing matplotlib > /dev/null
%load_ext autoreload
%autoreload 2
Note: you may need to restart the kernel to use updated packages.
from pathlib import Path
from brainles_preprocessing.defacing import QuickshearDefacer
from brainles_preprocessing.brain_extraction import HDBetExtractor
from brainles_preprocessing.modality import Modality, CenterModality
from brainles_preprocessing.preprocessor import Preprocessor
from brainles_preprocessing.registration import ANTsRegistrator
from brainles_preprocessing.normalization.percentile_normalizer import (
PercentileNormalizer,
)
import utils
/home/marcelrosier/preprocessing/brainles_preprocessing/registration/__init__.py:13: UserWarning: eReg package not found. If you want to use it, please install it using 'pip install brainles_preprocessing[ereg]'
# specify input and output paths
data_folder = Path("data/TCGA-DU-7294")
t1c_file = data_folder / "t1c.nii.gz"
t1_file = data_folder / "t1.nii.gz"
fla_file = data_folder / "fla.nii.gz"
t2_file = data_folder / "t2.nii.gz"
output_dir = Path("output")
t1c_normalized_skull_output_path = output_dir / "t1c_normalized_skull.nii.gz"
t1c_normalized_bet_output_path = output_dir / "t1c_normalized_bet.nii.gz"
t1c_normalized_defaced_output_path = output_dir / "t1c_normalized_defaced.nii.gz"
t1c_bet_mask = output_dir / "t1c_bet_mask.nii.gz"
t1c_defacing_mask = output_dir / "t1c_defacing_mask.nii.gz"
t1_normalized_bet_output_path = output_dir / "t1_normalized_bet.nii.gz"
fla_normalized_bet_output_path = output_dir / "fla_normalized_bet.nii.gz"
t2_normalized_bet_output_path = output_dir / "t2_normalized_bet.nii.gz"
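Before running the pipeline, it can help to make sure the output directory actually exists on disk. This is a small defensive sketch (the package may create missing directories itself; this just guarantees it either way):

```python
from pathlib import Path

output_dir = Path("output")
# create the output directory (and any missing parents); no-op if it already exists
output_dir.mkdir(parents=True, exist_ok=True)
print(f"Output directory ready: {output_dir.resolve()}")
```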
Let's take a look at our input data to understand what we are working with (note the differing resolutions).
utils.visualize_data(files=[t1c_file, t1_file, fla_file, t2_file], label="Input data")
Set up the Preprocessor by defining:
# normalizer
percentile_normalizer = PercentileNormalizer(
lower_percentile=0.1,
upper_percentile=99.9,
lower_limit=0,
upper_limit=1,
)
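Conceptually, percentile normalization clips intensities to the chosen percentile range (here 0.1 to 99.9) and rescales the result to `[lower_limit, upper_limit]`. This is a minimal NumPy sketch of that idea, not the package's actual implementation:

```python
import numpy as np

def percentile_normalize(img, lower_percentile=0.1, upper_percentile=99.9,
                         lower_limit=0.0, upper_limit=1.0):
    """Clip to the given percentiles, then rescale to [lower_limit, upper_limit]."""
    lo = np.percentile(img, lower_percentile)
    hi = np.percentile(img, upper_percentile)
    clipped = np.clip(img, lo, hi)
    scaled = (clipped - lo) / (hi - lo)
    return scaled * (upper_limit - lower_limit) + lower_limit

# toy stand-in for an MR volume with arbitrary intensity units
rng = np.random.default_rng(0)
fake_volume = rng.normal(loc=500.0, scale=100.0, size=(8, 8, 8))
normalized = percentile_normalize(fake_volume)
print(normalized.min(), normalized.max())  # values now lie in [0, 1]
```

Clipping before rescaling makes the normalization robust to a handful of extreme outlier voxels, which would otherwise compress the useful intensity range.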
# define modalities
# Define the center modality, i.e. the modality to which all other modalities are co-registered. It has its own class
# to allow saving additional outputs that are only relevant for the center modality (brain extraction (bet) and defacing masks)
center = CenterModality(
modality_name="t1c",
input_path=t1c_file,
normalizer=percentile_normalizer,
# specify desired outputs, here we want to save the normalized skull, bet and defaced images
normalized_skull_output_path=t1c_normalized_skull_output_path,
normalized_bet_output_path=t1c_normalized_bet_output_path,
normalized_defaced_output_path=t1c_normalized_defaced_output_path,
# also save the bet and defacing mask
bet_mask_output_path=t1c_bet_mask,
defacing_mask_output_path=t1c_defacing_mask,
)
# Define the moving modalities, i.e. the modalities that are co-registered to the center modality.
# They have mostly the same structure as the center modality, but lack the additional outputs (brain extraction (bet) and defacing masks)
moving_modalities = [
Modality(
modality_name="t1",
input_path=t1_file,
normalizer=percentile_normalizer,
normalized_bet_output_path=t1_normalized_bet_output_path,
),
Modality(
modality_name="t2",
input_path=t2_file,
normalizer=percentile_normalizer,
normalized_bet_output_path=t2_normalized_bet_output_path,
),
Modality(
modality_name="flair",
input_path=fla_file,
normalizer=percentile_normalizer,
normalized_bet_output_path=fla_normalized_bet_output_path,
),
]
preprocessor = Preprocessor(
center_modality=center,
moving_modalities=moving_modalities,
# Use ANTs for registration, other options are Niftyreg or eReg
registrator=ANTsRegistrator(),
# Use HDBet for brain extraction
brain_extractor=HDBetExtractor(),
    # Use Quickshear for defacing
defacer=QuickshearDefacer(),
    # limit CUDA visible devices to the GPU you want to use. CPU computation is also possible but slow
limit_cuda_visible_devices="0",
)
# the first run can be slower since the model weights for brain extraction are downloaded
preprocessor.run()
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:52:24+0100: ============================ Starting preprocessing ============================
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:52:24+0100: Logs are saved to /home/marcelrosier/tutorials/preprocessing/brainles_preprocessing_2024-11-07_T10-52-24.996942.log
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:52:24+0100: Received center modality: t1c and moving modalities: t1, t2, flair
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:52:24+0100: --------------------------- Starting Coregistration ----------------------------
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:52:24+0100: Coregistering 3 moving modalities to center modality...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:52:24+0100: Registering modality t1 (file=co__t1c__t1) to center modality...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:52:36+0100: Registering modality t2 (file=co__t1c__t2) to center modality...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:52:49+0100: Registering modality flair (file=co__t1c__flair) to center modality...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:00+0100: Coregistration complete. Output saved to None
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:00+0100: ------------------------- Starting atlas registration --------------------------
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:00+0100: Registering center modality to atlas...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:07+0100: Atlas registration complete. Output saved to /tmp/tmpy95ak7_t/atlas-space
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:07+0100: Transforming 3 moving modalities to atlas space...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:07+0100: Transforming modality t1 (file=atlas__t1) to atlas space...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:09+0100: Transforming modality t2 (file=atlas__t2) to atlas space...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:10+0100: Transforming modality flair (file=atlas__flair) to atlas space...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:12+0100: Transformations complete. Output saved to None
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:12+0100: ---------------------- Checking optional atlas correction ----------------------
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:12+0100: Applying optional atlas correction for modality t1
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:18+0100: Applying optional atlas correction for modality t2
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:24+0100: Applying optional atlas correction for modality flair
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:30+0100: Saving non skull-stripped images...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:31+0100: ---------------------- Checking optional brain extraction ----------------------
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:31+0100: Starting brain extraction...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:31+0100: Extracting brain region for center modality...
/home/marcelrosier/miniconda3/envs/tutorials/lib/python3.10/site-packages/brainles_hd_bet/run.py:99: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
File: /tmp/tmpy95ak7_t/atlas-space/atlas__t1c.nii.gz
preprocessing...
image shape after preprocessing: (103, 160, 160)
prediction (CNN id)... 0 1 2 3 4
exporting segmentation...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:41+0100: Applying brain mask to t1...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:42+0100: Applying brain mask to t2...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:42+0100: Applying brain mask to flair...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:43+0100: Saving brain extracted (bet), i.e. skull-stripped images...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:44+0100: -------------------------- Checking optional defacing --------------------------
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:44+0100: Starting defacing...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:44+0100: Defacing center modality...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:45+0100: Applying deface mask to t1c...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:45+0100: Applying deface mask to t1...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:45+0100: Applying deface mask to t2...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:45+0100: Applying deface mask to flair...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:45+0100: Saving defaced images...
[INFO | brainles_preprocessing.preprocessor] 2024-11-07T10:53:46+0100: ============================ Preprocessing complete ============================
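After `run()` completes, you can sanity-check that the expected center-modality output files were written. This sketch simply rebuilds the output paths defined earlier and reports which files are present:

```python
from pathlib import Path

output_dir = Path("output")
# the center modality (t1c) output files configured above
expected_outputs = [
    output_dir / name
    for name in [
        "t1c_normalized_skull.nii.gz",
        "t1c_normalized_bet.nii.gz",
        "t1c_normalized_defaced.nii.gz",
        "t1c_bet_mask.nii.gz",
        "t1c_defacing_mask.nii.gz",
    ]
]
for path in expected_outputs:
    # report which outputs exist on disk
    print(f"{path}: {'OK' if path.exists() else 'MISSING'}")
```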
# inspect the different outputs for the center modality (normalized, atlas-registered: with skull, brain extracted (bet), and defaced)
utils.visualize_data(
files=[
t1c_normalized_skull_output_path,
t1c_normalized_bet_output_path,
t1c_normalized_defaced_output_path,
],
label="T1C outputs",
)
# inspect the outputs for all modalities (normalized, atlas-registered, brain extracted (bet))
utils.visualize_data(
files=[
t1c_normalized_bet_output_path,
t1_normalized_bet_output_path,
fla_normalized_bet_output_path,
t2_normalized_bet_output_path,
],
label="BET outputs",
)
# showcase the defacing result from a more suitable angle
utils.visualize_defacing(file=t1c_normalized_defaced_output_path)