In this notebook we demonstrate how to use the AURORA package to segment cancer metastases in brain MRI.
If you installed the packages and requirements on your own machine, you can skip this section and start from the import section. Otherwise, you can follow and execute the tutorial in your browser. To start working on the notebook, click the following button; this will open the page in the Colab environment, where you can execute the code yourself.
Now that you are viewing the notebook in Colab, run the next cell to install the packages we will use. There are a few things you should do to set the notebook up properly:
!pip install brainles_aurora matplotlib
%load_ext autoreload
%autoreload 2
By running the next cell you will create a folder in your Google Drive. All the files for this tutorial will be uploaded to this folder. After the first execution you might receive some warnings and notifications; please follow these instructions:
Google Drive for desktop wants to access your Google Account. Click on 'Allow'.
# Create a folder in your Google Drive
# from google.colab import drive
# drive.mount('/content/drive')
# Don't run this cell if you already cloned the repo
# !git clone https://github.com/BrainLesion/tutorials.git
# make files from the repo available in colab
import sys
COLAB_BASE_PATH = "/content/tutorials/AURORA/"
sys.path.insert(0, COLAB_BASE_PATH)
from brainles_aurora.inferer import AuroraGPUInferer, AuroraInferer, AuroraInfererConfig
import nibabel as nib
import numpy as np
import utils # local file
AURORA expects preprocessed input data as NIfTI files or NumPy arrays (preprocessed meaning the files should be co-registered, skull-stripped, and in SRI-24 space).
In this example we provide sample data from the ASNR-MICCAI BraTS Brain Metastasis Challenge, already preprocessed, in the AURORA/data
folder, in the form of 4 modalities of the same brain (T1, T1C, T2, FLAIR). To get an intuition of the data, an example slice of the 3D scans is visualized below.
For your own data: If the data is not preprocessed yet, consider using our BrainLes preprocessing package (or its predecessor BraTS-Toolkit).
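If you bring your own data, it can be worth verifying that each volume matches the SRI-24 grid before running inference; the preprocessed BraTS data used here is 240 × 240 × 155 voxels (as the output shapes later in this notebook confirm). A minimal sketch of such a sanity check; `EXPECTED_SHAPE` and `check_input` are illustrative names for this tutorial, not part of the AURORA API:

```python
import numpy as np

# BraTS / SRI-24 convention: a 240 x 240 x 155 voxel grid
EXPECTED_SHAPE = (240, 240, 155)

def check_input(volume: np.ndarray) -> None:
    """Raise if a volume does not match the expected SRI-24 grid."""
    if volume.shape != EXPECTED_SHAPE:
        raise ValueError(f"expected shape {EXPECTED_SHAPE}, got {volume.shape}")

# synthetic stand-in for a loaded modality; for real data you would pass
# nib.load("data/t1c.nii.gz").get_fdata() instead
check_input(np.zeros(EXPECTED_SHAPE))  # passes silently
```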
utils.visualize_data()
# We first need to create an instance of the AuroraInfererConfig class, which will hold the configuration for the inferer. We can then create an instance of the AuroraInferer class, which will be used to perform the inference.
config = AuroraInfererConfig(
    tta=False,  # disable test-time augmentation for a quick demo; set to True for better results
    sliding_window_batch_size=4,  # batch size for sliding-window inference; decrease if you run out of memory (warning: very small batches might lead to unstable results)
)
# Now that we have the configuration we can create an instance of the AuroraInferer class. This class will be used to perform the inference. We can then call the infer method to perform the inference.
# If you don't have a GPU that supports CUDA, use the CPU version: uncomment the next line and comment out the GPU inferer
# inferer = AuroraInferer(config=config)
inferer = AuroraGPUInferer(
    config=config,
    cuda_devices="0",  # optional, if you have multiple GPUs you can specify which one to use
)
# The infer method takes the path to the T1c MRI file and the path to the output segmentation file as arguments. The output segmentation file will be created by the infer method and will contain the segmentation of the input T1c MRI.
# The example below shows how to perform the inference using a T1c MRI file:
_ = inferer.infer(
    t1c="data/t1c.nii.gz",
    segmentation_file="output/t1c_segmentation.nii.gz",
)
2024-02-06 15:54:36 INFO: Initialized AuroraGPUInferer with config: AuroraInfererConfig(log_level=20, tta=False, sliding_window_batch_size=4, workers=0, threshold=0.5, sliding_window_overlap=0.5, crop_size=(192, 192, 32), model_selection=<ModelSelection.BEST: 'best'>)
2024-02-06 15:54:36 INFO: Set torch device: cuda
2024-02-06 15:54:36 INFO: Infer with config: AuroraInfererConfig(log_level=20, tta=False, sliding_window_batch_size=4, workers=0, threshold=0.5, sliding_window_overlap=0.5, crop_size=(192, 192, 32), model_selection=<ModelSelection.BEST: 'best'>) and device: cuda
2024-02-06 15:54:36 INFO: Successfully validated input images. Input mode: NIFTI_FILEPATH
2024-02-06 15:54:36 INFO: Received files: T1: False, T1C: True, T2: False, FLAIR: False
2024-02-06 15:54:36 INFO: Inference mode: t1c-o
2024-02-06 15:54:36 INFO: No loaded compatible model found. Loading Model and weights
2024-02-06 15:54:36 INFO: Setting up Dataloader
2024-02-06 15:54:36 INFO: Running inference on device := cuda
BasicUNet features: (32, 32, 64, 128, 256, 32).
2024-02-06 15:54:39 INFO: Post-processing data
2024-02-06 15:54:43 INFO: Saving post-processed data as NIFTI files
2024-02-06 15:54:43 INFO: Saved segmentation to output/t1c_segmentation.nii.gz
2024-02-06 15:54:43 INFO: Returning post-processed data as Dict of Numpy arrays
2024-02-06 15:54:43 INFO: Finished inference
utils.visualize_segmentation(
    modality_file="data/t1c.nii.gz",
    segmentation_file="output/t1c_segmentation.nii.gz",
)
AURORA also supports different combinations of multi-modal MRI files (see manuscript). It will automatically select a suitable model depending on the inputs supplied.
The example below shows how to perform the inference using multi-modal inputs.
config = AuroraInfererConfig() # Use default config
# If you don't have a GPU that supports CUDA, use the CPU version: AuroraInferer(config=config)
inferer = AuroraGPUInferer(
    config=config,
)
# Use all four input modalities; we also create additional outputs and a custom log file
_ = inferer.infer(
    t1="data/t1n.nii.gz",
    t1c="data/t1c.nii.gz",
    t2="data/t2w.nii.gz",
    fla="data/t2f.nii.gz",
    segmentation_file="output/multi-modal_segmentation.nii.gz",
    # The unbinarized network outputs for the whole tumor channel (edema + enhancing tumor core + necrosis)
    whole_tumor_unbinarized_floats_file="output/whole_tumor_unbinarized_floats.nii.gz",
    # The unbinarized network outputs for the metastasis (tumor core) channel
    metastasis_unbinarized_floats_file="output/metastasis_unbinarized_floats.nii.gz",
    log_file="output/custom_logfile.log",
)
2024-02-06 09:12:39 INFO: Initialized AuroraGPUInferer with config: AuroraInfererConfig(log_level=20, tta=True, sliding_window_batch_size=1, workers=0, threshold=0.5, sliding_window_overlap=0.5, crop_size=(192, 192, 32), model_selection=<ModelSelection.BEST: 'best'>)
2024-02-06 09:12:39 INFO: Set torch device: cuda
2024-02-06 09:12:39 INFO: Infer with config: AuroraInfererConfig(log_level=20, tta=True, sliding_window_batch_size=1, workers=0, threshold=0.5, sliding_window_overlap=0.5, crop_size=(192, 192, 32), model_selection=<ModelSelection.BEST: 'best'>) and device: cuda
2024-02-06 09:12:39 INFO: Successfully validated input images. Input mode: NIFTI_FILEPATH
2024-02-06 09:12:39 INFO: Received files: T1: True, T1C: True, T2: True, FLAIR: True
2024-02-06 09:12:39 INFO: Inference mode: t1-t1c-t2-fla
2024-02-06 09:12:39 INFO: No loaded compatible model found. Loading Model and weights
2024-02-06 09:12:39 INFO: Setting up Dataloader
2024-02-06 09:12:39 INFO: Running inference on device := cuda
BasicUNet features: (32, 32, 64, 128, 256, 32).
2024-02-06 09:12:46 INFO: Applying test time augmentations
2024-02-06 09:14:14 INFO: Post-processing data
2024-02-06 09:14:14 INFO: Saving post-processed data as NIFTI files
2024-02-06 09:14:14 INFO: Saved segmentation to output/multi-modal_segmentation.nii.gz
2024-02-06 09:14:14 INFO: Saved whole_network to output/whole_tumor_unbinarized_floats.nii.gz
2024-02-06 09:14:15 INFO: Saved metastasis_network to output/metastasis_unbinarized_floats.nii.gz
2024-02-06 09:14:15 INFO: Returning post-processed data as Dict of Numpy arrays
2024-02-06 09:14:15 INFO: Finished inference
config = AuroraInfererConfig()
# inferer = AuroraInferer(config=config)  # If you don't have a GPU that supports CUDA, use the CPU version (uncomment this and comment out the GPU inferer)
inferer = AuroraGPUInferer(config=config)
# we load the NIfTI data into a numpy array
t1_np = nib.load("data/t1n.nii.gz").get_fdata()
# we can now use the inferer to perform the inference and obtain the segmentation results as numpy arrays
results = inferer.infer(t1=t1_np)
print([f"{k} : {v.shape}" for k, v in results.items()])
2024-02-06 09:16:17 INFO: Initialized AuroraGPUInferer with config: AuroraInfererConfig(log_level=20, tta=True, sliding_window_batch_size=1, workers=0, threshold=0.5, sliding_window_overlap=0.5, crop_size=(192, 192, 32), model_selection=<ModelSelection.BEST: 'best'>)
2024-02-06 09:16:17 INFO: Set torch device: cuda
2024-02-06 09:16:17 INFO: Infer with config: AuroraInfererConfig(log_level=20, tta=True, sliding_window_batch_size=1, workers=0, threshold=0.5, sliding_window_overlap=0.5, crop_size=(192, 192, 32), model_selection=<ModelSelection.BEST: 'best'>) and device: cuda
2024-02-06 09:16:17 INFO: Successfully validated input images. Input mode: NP_NDARRAY
2024-02-06 09:16:17 INFO: Received files: T1: True, T1C: False, T2: False, FLAIR: False
2024-02-06 09:16:17 INFO: Inference mode: t1-o
2024-02-06 09:16:17 INFO: No loaded compatible model found. Loading Model and weights
2024-02-06 09:16:17 INFO: Setting up Dataloader
2024-02-06 09:16:17 INFO: Running inference on device := cuda
BasicUNet features: (32, 32, 64, 128, 256, 32).
2024-02-06 09:16:23 INFO: Applying test time augmentations
2024-02-06 09:17:47 INFO: Post-processing data
2024-02-06 09:17:47 INFO: Returning post-processed data as Dict of Numpy arrays
2024-02-06 09:17:47 INFO: Finished inference
['segmentation : (240, 240, 155)', 'whole_network : (240, 240, 155)', 'metastasis_network : (240, 240, 155)']
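Besides the binarized `segmentation`, the `whole_network` and `metastasis_network` entries hold the unbinarized network outputs (the config binarizes at `threshold=0.5` by default), so you can apply your own threshold afterwards. A minimal numpy sketch, with a small synthetic array standing in for one of these outputs:

```python
import numpy as np

# synthetic stand-in for e.g. results["whole_network"]
network_floats = np.array([0.1, 0.4, 0.6, 0.9])

custom_threshold = 0.7  # stricter than the default of 0.5
binary_mask = (network_floats >= custom_threshold).astype(np.uint8)
print(binary_mask)  # → [0 0 0 1]
```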
# now we can use the capabilities of numpy without having to re-read a NIfTI file; for example, we can compute the number of metastasis voxels (the metastasis volume in voxels) as follows:
whole_metastasis_voxels = results["segmentation"] > 0
print("metastasis volume (including edema)", np.count_nonzero(whole_metastasis_voxels))
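The voxel count can be converted to a physical volume by multiplying by the voxel volume, which for NIfTI files can be read from the header via `nib.load(...).header.get_zooms()`. A minimal sketch with a synthetic mask and illustrative 1 mm isotropic spacing:

```python
import numpy as np

# synthetic segmentation mask: a 10 x 10 x 10 block of lesion voxels
segmentation = np.zeros((240, 240, 155))
segmentation[50:60, 50:60, 50:60] = 1

# voxel spacing in mm, e.g. taken from nib.load(path).header.get_zooms()
zooms = (1.0, 1.0, 1.0)  # illustrative 1 mm isotropic spacing
voxel_volume_mm3 = float(np.prod(zooms))

lesion_volume = np.count_nonzero(segmentation) * voxel_volume_mm3
print(f"lesion volume: {lesion_volume} mm^3")  # → 1000.0 mm^3
```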