In this notebook we demonstrate how to use the BraTS package to run top-performing algorithms from recent BraTS challenges.
# Installations
!pip install brats matplotlib ipywidgets > /dev/null
%load_ext autoreload
%autoreload 2
If you installed the packages and requirements on your own machine, you can skip this section and start from the import section.
Otherwise you can follow and execute the tutorial in your browser. To start working on the notebook, click on the following button; this will open the page in the Colab environment and you will be able to execute the code on your own (Google account required).
Now that you are viewing the notebook in Colab, run the next cell to install the packages we will use. There are a few things to keep in mind to set the notebook up properly:
If you run the next cell in a Google Colab environment, it will clone the 'tutorials' repository into your Google Drive. This creates a new folder called "tutorials" in your Google Drive, and all generated files will be written to your Google Drive as well.
After the first execution of the next cell, you might receive some warnings and notifications; please follow the displayed instructions.
Afterwards, the "tutorials" folder will have been created; you can browse it via the left-hand panel in Colab. You might also receive an email informing you about the access to your Google Drive.
import sys

# Check if we are currently running in Google Colab
try:
    import google.colab
    colabFlag = True
except ImportError:
    colabFlag = False

# Execute certain steps only if we are in a Colab environment
if colabFlag:
    # Mount your Google Drive
    from google.colab import drive
    drive.mount("/content/drive")
    # Clone the repository and set the path
    !git clone https://github.com/BrainLesion/tutorials.git /content/drive/MyDrive/tutorials
    BASE_PATH = "/content/drive/MyDrive/tutorials/BraTS/"
    sys.path.insert(0, BASE_PATH)
else:  # regular Jupyter notebook environment
    BASE_PATH = "./"  # assumes the current working directory is the BraTS tutorial folder
from pathlib import Path
from brats import AdultGliomaSegmenter
from brats.utils.constants import AdultGliomaAlgorithms
import utils # local file
brats expects preprocessed input data as NIfTI files (preprocessed meaning the files should be co-registered, skull-stripped, and in SRI-24 space).
In this example we provide:

- BraTS/data/segmentation (data from the RSNA-ASNR-MICCAI BraTS Continuous Evaluation Challenge)
- BraTS/data/inpainting (data from the ASNR-MICCAI BraTS Local Synthesis of Tissue via Inpainting Challenge)

To get an intuition of the data, one example slice of the 3D scans is visualized below for a set of segmentation (t1n, t1c, t2f, t2w) and inpainting (t1n, mask) inputs.
subject = "BraTS-GLI-00001-000"
segmentation_data_path = Path(BASE_PATH) / "data" / "segmentation"
inpainting_data_path = Path(BASE_PATH) / "data" / "inpainting"
segmentation_subject_path = segmentation_data_path / subject
inpainting_subject_path = inpainting_data_path / subject
utils.visualize_segmentation_data(segmentation_data_path, subject_id=subject)
utils.visualize_inpainting_data(inpainting_data_path, subject_id=subject)
If the data is not preprocessed yet, consider using our BrainLes preprocessing package (or its predecessor BraTS-Toolkit).
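Before running inference, you can optionally sanity-check the input geometry with nibabel (a common neuroimaging dependency, not something required by brats itself). The 240 x 240 x 155 matrix and 1 mm isotropic spacing mentioned in the comment reflect the usual BraTS/SRI-24 convention and are an illustrative assumption, not a check enforced by the package.
# Optional sanity check (sketch): inspect shape and voxel spacing of one input modality.
# Typical BraTS data in SRI-24 space is 240 x 240 x 155 voxels at 1 mm isotropic spacing.
import nibabel as nib

t1c_img = nib.load(segmentation_subject_path / f"{subject}-t1c.nii.gz")
print("shape:", t1c_img.shape)
print("voxel spacing:", t1c_img.header.get_zooms())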
segmenter = AdultGliomaSegmenter()
segmenter.infer_single(
    t1c=segmentation_subject_path / f"{subject}-t1c.nii.gz",
    t1n=segmentation_subject_path / f"{subject}-t1n.nii.gz",
    t2f=segmentation_subject_path / f"{subject}-t2f.nii.gz",
    t2w=segmentation_subject_path / f"{subject}-t2w.nii.gz",
    output_file="segmentation.nii.gz",
)
2024-08-30 15:03:26.528 | INFO | brats.core.brats_algorithm:__init__:46 - Instantiated AdultGliomaSegmenter with algorithm: BraTS23_1 by André Ferreira, et al.
2024-08-30 15:03:26.529 | INFO | brats.core.brats_algorithm:_infer_single:121 - Performing single inference
2024-08-30 15:03:26.539 | INFO | brats.core.docker:_log_algorithm_info:276 - Running algorithm: BraTS23 Adult Glioma Segmentation [1st place]
2024-08-30 15:03:26.540 | INFO | brats.core.docker:_log_algorithm_info:279 - (Paper) Consider citing the corresponding paper: https://arxiv.org/abs/2402.17317v1 by André Ferreira, et al.
2024-08-30 15:03:27.157 | INFO | brats.core.docker:run_container:329 - Starting inference
2024-08-30 15:06:56.147 | INFO | brats.core.docker:run_container:349 - Finished inference in 208.99 seconds
2024-08-30 15:06:56.150 | INFO | brats.core.brats_algorithm:_infer_single:144 - Saved output to: /home/ivan_marcel/tutorials/BraTS/segmentation.nii.gz
utils.visualize_segmentation(
    modality_file=segmentation_subject_path / f"{subject}-t1c.nii.gz",
    segmentation_file="segmentation.nii.gz",
)
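Beyond the visual overlay, you can inspect the predicted mask numerically, for example by counting the voxels per label. This is a minimal sketch using nibabel and numpy; the meaning of the individual label values depends on the respective challenge convention and is not assumed here.
# Minimal sketch: count voxels per label in the predicted segmentation.
import nibabel as nib
import numpy as np

seg = np.asarray(nib.load("segmentation.nii.gz").dataobj)
labels, counts = np.unique(seg, return_counts=True)
for label, count in zip(labels, counts):
    print(f"label {int(label)}: {count} voxels")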
BraTS allows you to run an algorithm either for a single set of input images (t1n, t1c, t2f, t2w of the same patient) or for multiple subjects. Each of the available classes provides methods for both:

- .infer_single(...) takes the paths to the required input modalities and a path to store the result.
- .infer_batch(...) takes a path to a data folder containing multiple subjects and a path to an output folder to store the results.

The subject inputs need to be stored in the following structure to be recognized by the package (a sketch for building this structure follows the tree below):
data_folder
┣ A
┃ ┣ A-t1c.nii.gz
┃ ┣ A-t1n.nii.gz
┃ ┣ A-t2f.nii.gz
┃ ┗ A-t2w.nii.gz
┣ B
┃ ┣ B-t1c.nii.gz
┃ ┣ ...
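If your files are not yet organized like this, a few lines of pathlib and shutil are usually enough to build the structure. The snippet below is a hypothetical sketch that assumes a flat source folder containing files named <subject>-<modality>.nii.gz; it is not part of the brats API.
# Hypothetical sketch: arrange flat files <subject>-<modality>.nii.gz into per-subject folders.
import shutil
from pathlib import Path

flat_folder = Path("flat_niftis")  # assumed source folder with all files in one place
data_folder = Path("data_folder")  # target structure expected by infer_batch

for nifti in flat_folder.glob("*.nii.gz"):
    subject_id = nifti.name.rsplit("-", 1)[0]  # e.g. "A-t1c.nii.gz" -> "A"
    subject_dir = data_folder / subject_id
    subject_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(nifti, subject_dir / nifti.name)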
output_path = Path("batch_out")
segmenter = AdultGliomaSegmenter()
segmenter.infer_batch(
    data_folder=segmentation_data_path,
    output_folder=output_path,
)
print([path.name for path in output_path.iterdir()])
2024-08-30 15:08:43.567 | INFO | brats.core.brats_algorithm:__init__:46 - Instantiated AdultGliomaSegmenter with algorithm: BraTS23_1 by André Ferreira, et al.
2024-08-30 15:08:43.569 | INFO | brats.core.brats_algorithm:_infer_batch:163 - Found 2 subjects: BraTS-GLI-00001-001, BraTS-GLI-00001-000
2024-08-30 15:08:43.594 | INFO | brats.core.brats_algorithm:_infer_batch:172 - Standardized input names to match algorithm requirements.
2024-08-30 15:08:43.595 | INFO | brats.core.docker:_log_algorithm_info:276 - Running algorithm: BraTS23 Adult Glioma Segmentation [1st place]
2024-08-30 15:08:43.595 | INFO | brats.core.docker:_log_algorithm_info:279 - (Paper) Consider citing the corresponding paper: https://arxiv.org/abs/2402.17317v1 by André Ferreira, et al.
2024-08-30 15:08:44.146 | INFO | brats.core.docker:run_container:329 - Starting inference
2024-08-30 15:12:50.584 | INFO | brats.core.docker:run_container:349 - Finished inference in 246.44 seconds
2024-08-30 15:12:50.586 | INFO | brats.core.brats_algorithm:_infer_batch:189 - Saved outputs to: /home/ivan_marcel/tutorials/BraTS/batch_out
['BraTS-GLI-00001-000.nii.gz', 'BraTS-GLI-00001-001.nii.gz']
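To post-process the batch results programmatically, you can simply iterate over the output folder. Below is a minimal sketch (again using nibabel and numpy, which are assumptions rather than brats requirements) that reports the number of non-background voxels per subject.
# Minimal sketch: iterate over the batch outputs and report the non-background voxel count.
import nibabel as nib
import numpy as np

for seg_file in sorted(output_path.glob("*.nii.gz")):
    seg = np.asarray(nib.load(seg_file).dataobj)
    print(seg_file.name, "->", int((seg > 0).sum()), "segmented voxels")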
By default, the algorithm that won the most recent challenge is run on the first available GPU. This behavior and other options can be adapted via, e.g., the:

- algorithm parameter
- cuda_devices parameter
- force_cpu flag (will cause an exception for many algorithms, since most do not support CPU execution)
- log_file parameter

segmenter = AdultGliomaSegmenter(
    algorithm=AdultGliomaAlgorithms.BraTS23_3,  # Use the 3rd-place algorithm of the Adult Glioma BraTS 2023 challenge
    cuda_devices="4",  # Select GPU device with ID 4
    force_cpu=False,  # default; could be set to True to force CPU execution
)
segmenter.infer_single(
    t1c=segmentation_subject_path / f"{subject}-t1c.nii.gz",
    t1n=segmentation_subject_path / f"{subject}-t1n.nii.gz",
    t2f=segmentation_subject_path / f"{subject}-t2f.nii.gz",
    t2w=segmentation_subject_path / f"{subject}-t2w.nii.gz",
    output_file="segmentation.nii.gz",
    log_file="segmentation.log",  # Save the logs in a new file called `segmentation.log`
)
2024-08-30 15:14:13.367 | INFO | brats.core.brats_algorithm:__init__:46 - Instantiated AdultGliomaSegmenter with algorithm: BraTS23_3 by Fadillah Adamsyah Maani, et al.
2024-08-30 15:14:13.371 | INFO | brats.utils.data_handling:add_log_file_handler:41 - Logging console logs and further debug information to: /home/ivan_marcel/tutorials/BraTS/segmentation.log
2024-08-30 15:14:13.372 | INFO | brats.core.brats_algorithm:_infer_single:121 - Performing single inference
2024-08-30 15:14:13.383 | INFO | brats.core.docker:_log_algorithm_info:276 - Running algorithm: BraTS23 Adult Glioma Segmentation [3rd place]
2024-08-30 15:14:13.384 | INFO | brats.core.docker:_log_algorithm_info:279 - (Paper) Consider citing the corresponding paper: N/A by Fadillah Adamsyah Maani, et al.
2024-08-30 15:14:13.803 | INFO | brats.utils.zenodo:check_additional_files_path:48 - Model weights not found locally
2024-08-30 15:14:13.804 | INFO | brats.utils.zenodo:_download_additional_files:148 - Downloading model weights from Zenodo. This might take a while...
2024-08-30 15:15:33.438 | INFO | brats.utils.zenodo:_download_additional_files:160 - Zip file extracted successfully to /home/ivan_marcel/miniconda3/envs/tutorials/lib/python3.10/site-packages/brats/data/additional_files/11573315_v1.0.1
2024-08-30 15:15:33.829 | INFO | brats.core.docker:run_container:329 - Starting inference
2024-08-30 15:21:19.623 | INFO | brats.core.docker:run_container:349 - Finished inference in 345.79 seconds
2024-08-30 15:21:19.625 | INFO | brats.core.brats_algorithm:_infer_single:144 - Saved output to: /home/ivan_marcel/tutorials/BraTS/segmentation.nii.gz
BraTS provides algorithms from all recent BraTS challenges.
The package provides a separate class and algorithm constants for each challenge.
The examples above used the class and constants of the Adult Glioma Segmentation challenge.
In the same way you can use, e.g.:

- the MeningiomaSegmenter class with the MeningiomaAlgorithms constants
- the PediatricSegmenter class with the PediatricAlgorithms constants

A full overview of all available algorithms can be found in the project's README.
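If you want to see which algorithm identifiers are available for a challenge without opening the README, you can iterate over the corresponding constants. The sketch below assumes the algorithm constants behave like standard Python enums, which matches the way they are imported above.
# Sketch: list the available algorithm identifiers for a challenge (assuming enum-like constants).
from brats.utils.constants import AdultGliomaAlgorithms

for algorithm in AdultGliomaAlgorithms:
    print(algorithm)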
# e.g. for the Meningioma Algorithms
from brats import MeningiomaSegmenter
from brats.utils.constants import MeningiomaAlgorithms
segmenter = MeningiomaSegmenter(
    algorithm=MeningiomaAlgorithms.BraTS23_2, cuda_devices="1"
)
segmenter.infer_batch(
    data_folder=segmentation_data_path, output_folder="men_output", log_file="test.log"
)
2024-08-30 15:24:09.444 | INFO | brats.core.brats_algorithm:__init__:46 - Instantiated MeningiomaSegmenter with algorithm: BraTS23_2 by Ziyan Huang, et al.
2024-08-30 15:24:09.448 | INFO | brats.utils.data_handling:add_log_file_handler:41 - Logging console logs and further debug information to: /home/ivan_marcel/tutorials/BraTS/test.log
2024-08-30 15:24:09.449 | INFO | brats.core.brats_algorithm:_infer_batch:163 - Found 2 subjects: BraTS-GLI-00001-001, BraTS-GLI-00001-000
2024-08-30 15:24:09.478 | INFO | brats.core.brats_algorithm:_infer_batch:172 - Standardized input names to match algorithm requirements.
2024-08-30 15:24:09.479 | INFO | brats.core.docker:_log_algorithm_info:276 - Running algorithm: BraTS23 Meningioma Segmentation [2nd place]
2024-08-30 15:24:09.479 | INFO | brats.core.docker:_log_algorithm_info:279 - (Paper) Consider citing the corresponding paper: N/A by Ziyan Huang, et al.
2024-08-30 15:24:10.041 | INFO | brats.core.docker:run_container:329 - Starting inference
2024-08-30 15:24:56.671 | INFO | brats.core.docker:run_container:349 - Finished inference in 46.63 seconds
2024-08-30 15:24:56.673 | INFO | brats.core.brats_algorithm:_infer_batch:189 - Saved outputs to: /home/ivan_marcel/tutorials/BraTS/men_output
The inpainting algorithms have an almost identical interface; only the required input files differ.
Everything else remains the same.
from brats import Inpainter
inpainter = Inpainter()
inpainter.infer_single(
    t1n=inpainting_subject_path / f"{subject}-t1n-voided.nii.gz",
    mask=inpainting_subject_path / f"{subject}-mask.nii.gz",
    output_file="inpainting.nii.gz",
)
2024-08-30 15:32:45.405 | INFO | brats.core.brats_algorithm:__init__:46 - Instantiated Inpainter with algorithm: BraTS23_1 by Juexin Zhang, et al.
2024-08-30 15:32:45.406 | INFO | brats.core.brats_algorithm:_infer_single:121 - Performing single inference
2024-08-30 15:32:45.410 | INFO | brats.core.docker:_log_algorithm_info:276 - Running algorithm: BraTS23 Inpainting [1st place]
2024-08-30 15:32:45.411 | INFO | brats.core.docker:_log_algorithm_info:279 - (Paper) Consider citing the corresponding paper: N/A by Juexin Zhang, et al.
2024-08-30 15:32:45.785 | INFO | brats.utils.zenodo:check_additional_files_path:56 - Found downloaded local weights: 13382922_v1.0.1
2024-08-30 15:32:45.786 | INFO | brats.utils.zenodo:check_additional_files_path:66 - Latest model weights (13382922_v1.0.1) are already present.
2024-08-30 15:32:46.179 | INFO | brats.core.docker:run_container:329 - Starting inference
2024-08-30 15:33:01.399 | INFO | brats.core.docker:run_container:349 - Finished inference in 15.22 seconds
2024-08-30 15:33:01.401 | INFO | brats.core.brats_algorithm:_infer_single:144 - Saved output to: /home/ivan_marcel/tutorials/BraTS/inpainting.nii.gz
utils.visualize_inpainting(
    t1n_voided=inpainting_subject_path / f"{subject}-t1n-voided.nii.gz",
    prediction="inpainting.nii.gz",
)
Batch inference can be used in the same way, but expects an adapted structure of the data folder:
data_folder
┣ A
┃ ┣ A-t1n-voided.nii.gz
┃ ┣ A-mask.nii.gz
┣ B
┃ ┣ B-t1n-voided.nii.gz
┃ ┣ ...
output_path = Path("inpainting_batch_out")
inpainter = Inpainter()
inpainter.infer_batch(
    data_folder=inpainting_data_path,
    output_folder=output_path,
)
print([path.name for path in output_path.iterdir()])
2024-08-30 15:29:59.806 | INFO | brats.core.brats_algorithm:__init__:46 - Instantiated Inpainter with algorithm: BraTS23_1 by Juexin Zhang, et al.
2024-08-30 15:29:59.808 | INFO | brats.core.brats_algorithm:_infer_batch:163 - Found 1 subjects: BraTS-GLI-00001-000
2024-08-30 15:29:59.813 | INFO | brats.core.brats_algorithm:_infer_batch:172 - Standardized input names to match algorithm requirements.
2024-08-30 15:29:59.814 | INFO | brats.core.docker:_log_algorithm_info:276 - Running algorithm: BraTS23 Inpainting [1st place]
2024-08-30 15:29:59.815 | INFO | brats.core.docker:_log_algorithm_info:279 - (Paper) Consider citing the corresponding paper: N/A by Juexin Zhang, et al.
2024-08-30 15:30:00.250 | INFO | brats.utils.zenodo:check_additional_files_path:56 - Found downloaded local weights: 13382922_v1.0.1
2024-08-30 15:30:00.251 | INFO | brats.utils.zenodo:check_additional_files_path:66 - Latest model weights (13382922_v1.0.1) are already present.
2024-08-30 15:30:00.645 | INFO | brats.core.docker:run_container:329 - Starting inference
2024-08-30 15:30:15.967 | INFO | brats.core.docker:run_container:349 - Finished inference in 15.32 seconds
2024-08-30 15:30:15.969 | INFO | brats.core.brats_algorithm:_infer_batch:189 - Saved outputs to: /home/ivan_marcel/tutorials/BraTS/inpainting_batch_out
['BraTS-GLI-00001-000.nii.gz']
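As a final check, you can compare the inpainted volume against the mask region, for example by summarizing the predicted intensities inside the voided area. This is a hedged sketch using nibabel and numpy; it only relies on the file names produced above.
# Hedged sketch: summarize inpainted intensities inside the masked (voided) region.
import nibabel as nib
import numpy as np

prediction = np.asarray(nib.load("inpainting.nii.gz").dataobj)
mask = np.asarray(nib.load(inpainting_subject_path / f"{subject}-mask.nii.gz").dataobj) > 0

print("masked voxels:", int(mask.sum()))
print("mean inpainted intensity inside mask:", float(prediction[mask].mean()))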