!pip install panoptica auxiliary rich numpy > /dev/null
If you installed the packages and requirements on your own machine, you can skip this section and start from the import section.
Otherwise, you can follow and execute the tutorial in your browser. To start working on the notebook, click the following button; this will open the page in the Colab environment, where you can execute the code yourself (a Google account is required).
Now that you are viewing the notebook in Colab, run the next cell to install the packages we will use. There are a few things you should do to set the notebook up properly:
If you run the next cell in a Google Colab environment, it will clone the 'tutorials' repository into your Google Drive. This will create a new folder called "tutorials" in your Google Drive, and all generated files will be created in/uploaded to that folder.
After the first execution of the next cell, you might receive some warnings and notifications; please follow these instructions:
Afterwards, the "tutorials" folder has been created, and you can navigate it through the left-hand panel in Colab. You might also have received an email informing you about the access to your Google Drive.
import sys
# Check if we are in google colab currently
try:
    import google.colab

    colabFlag = True
except ImportError:
    colabFlag = False
# Execute certain steps only if we are in a colab environment
if colabFlag:
    # Create a folder in your Google Drive
    from google.colab import drive

    drive.mount("/content/drive")
    # clone repository and set path
    !git clone https://github.com/BrainLesion/tutorials.git /content/drive/MyDrive/tutorials
    BASE_PATH = "/content/drive/MyDrive/tutorials/panoptica"
    sys.path.insert(0, BASE_PATH)
else:  # normal jupyter notebook environment
    BASE_PATH = "."  # outside Colab, the tutorial files are expected in the current working directory
from auxiliary.nifti.io import read_nifti
from rich import print as pprint
from panoptica import (
    InputType,
    Panoptica_Evaluator,
    ConnectedComponentsInstanceApproximator,
    NaiveThresholdMatching,
)
No module named 'pandas' OPTIONAL PACKAGE MISSING
To demonstrate, we use a reference and a prediction of a spine segmentation without instances.
ref_masks = read_nifti(f"{BASE_PATH}/spine_seg/semantic/ref.nii.gz")
pred_masks = read_nifti(f"{BASE_PATH}/spine_seg/semantic/pred.nii.gz")
To use your own data, please replace the example data with your own.
In order to load your data successfully, please use NIfTI files with the following file names within the "semantic" folder:
panoptica/spine_seg/semantic/
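To double-check that your data is in place before running the evaluator, a small helper like the following can be used. This is a hypothetical convenience function, not part of panoptica; pass your own `BASE_PATH`:

```python
from pathlib import Path


def missing_semantic_files(base_path: str) -> list:
    """Return the expected semantic-folder files that are not present."""
    expected = ["ref.nii.gz", "pred.nii.gz"]
    folder = Path(base_path) / "spine_seg" / "semantic"
    return [name for name in expected if not (folder / name).exists()]


# missing_semantic_files(BASE_PATH) should return [] once your data is in place
```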
evaluator = Panoptica_Evaluator(
expected_input=InputType.SEMANTIC,
instance_approximator=ConnectedComponentsInstanceApproximator(),
instance_matcher=NaiveThresholdMatching(),
)
The results object allows access to individual metrics and provides helper methods for further processing.
# print all results
result, intermediate_steps_data = evaluator.evaluate(
    pred_masks, ref_masks, verbose=False
)["ungrouped"]
print(result)
────────────────────────────────────────── Thank you for using panoptica ──────────────────────────────────────────
Please support our development by citing
https://github.com/BrainLesion/panoptica#citation -- Thank you!
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────
+++ MATCHING +++
Number of instances in reference (num_ref_instances): 87
Number of instances in prediction (num_pred_instances): 89
True Positives (tp): 73
False Positives (fp): 16
False Negatives (fn): 14
Recognition Quality / F1-Score (rq): 0.8295454545454546

+++ GLOBAL +++
Global Binary Dice (global_bin_dsc): 0.9731641527805414

+++ INSTANCE +++
Segmentation Quality IoU (sq): 0.7940127477906024 +- 0.11547745015679488
Panoptic Quality IoU (pq): 0.6586696657808406
Segmentation Quality Dsc (sq_dsc): 0.8802182546605446 +- 0.07728416427007166
Panoptic Quality Dsc (pq_dsc): 0.7301810521615881
Segmentation Quality ASSD (sq_assd): 0.20573710924944655 +- 0.13983482367660682
Segmentation Quality Relative Volume Difference (sq_rvd): 0.01134021986061723 +- 0.1217805112447998
# get specific metric, e.g. pq
pprint(f"{result.pq=}")
result.pq=0.6586696657808406
# get dict for further processing, e.g. for pandas
pprint("results dict: ", result.to_dict())
results dict:
{
    'num_ref_instances': 87,
    'num_pred_instances': 89,
    'tp': 73,
    'fp': 16,
    'fn': 14,
    'prec': 0.8202247191011236,
    'rec': 0.8390804597701149,
    'rq': 0.8295454545454546,
    'sq': 0.7940127477906024,
    'sq_std': 0.11547745015679488,
    'pq': 0.6586696657808406,
    'sq_dsc': 0.8802182546605446,
    'sq_dsc_std': 0.07728416427007166,
    'pq_dsc': 0.7301810521615881,
    'sq_assd': 0.20573710924944655,
    'sq_assd_std': 0.13983482367660682,
    'sq_rvd': 0.01134021986061723,
    'sq_rvd_std': 0.1217805112447998,
    'global_bin_dsc': 0.9731641527805414
}
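Since `to_dict()` returns a plain dictionary, it plugs directly into pandas (the optional package flagged during import). A minimal sketch, using a subset of the values printed in the example run above:

```python
import pandas as pd

# Subset of the results dict from the example run above
row = {
    "num_ref_instances": 87,
    "num_pred_instances": 89,
    "tp": 73,
    "fp": 16,
    "fn": 14,
    "rq": 0.8295454545454546,
    "pq": 0.6586696657808406,
}

# One row per evaluated case; append more rows for more subjects
df = pd.DataFrame([row])
print(df[["tp", "fp", "fn", "rq", "pq"]])
```

In practice you would build the row with `result.to_dict()` and collect one row per evaluated subject before constructing the DataFrame.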
# To inspect different phases, just use the returned intermediate_steps_data object
import numpy as np
intermediate_steps_data.original_prediction_arr # yields input prediction array
intermediate_steps_data.original_reference_arr # yields input reference array
intermediate_steps_data.prediction_arr(
InputType.MATCHED_INSTANCE
) # yields prediction array after instances have been matched
intermediate_steps_data.reference_arr(
InputType.MATCHED_INSTANCE
) # yields reference array after instances have been matched
# This works with all InputType
for i in InputType:
    print(i)
    pred = intermediate_steps_data.prediction_arr(i)
    ref = intermediate_steps_data.reference_arr(i)
    print("Prediction array shape =", pred.shape, "unique_values=", np.unique(pred))
    print("Reference array shape =", ref.shape, "unique_values=", np.unique(ref))
    print()
InputType.SEMANTIC
Prediction array shape = (170, 512, 17) unique_values= [ 0 26 41 42 43 44 45 46 47 48 49 60 61 62 100]
Reference array shape = (170, 512, 17) unique_values= [ 0 26 41 42 43 44 45 46 47 48 49 60 61 62 100]

InputType.UNMATCHED_INSTANCE
Prediction array shape = (170, 512, 17) unique_values= [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89]
Reference array shape = (170, 512, 17) unique_values= [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87]

InputType.MATCHED_INSTANCE
Prediction array shape = (170, 512, 17) unique_values= [ 0 1 2 3 4 6 7 8 9 10 12 13 14 15 16 17 18 19 20 22 23 25 26 31 33 34 35 38 40 41 42 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103]
Reference array shape = (170, 512, 17) unique_values= [ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87]
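After matching, corresponding instances carry the same label in the prediction and reference arrays (as the MATCHED_INSTANCE output above shows), so per-instance statistics can be computed by comparing equal labels directly. A toy sketch with small stand-in arrays (not the tutorial data):

```python
import numpy as np

# Toy stand-ins for matched-instance prediction/reference arrays;
# label 0 is background, matched instances share the same label
pred = np.array([[0, 1, 1], [2, 2, 0]])
ref = np.array([[0, 1, 0], [2, 2, 2]])

# Per-instance intersection over union for each label present in both arrays
for label in np.intersect1d(np.unique(pred), np.unique(ref)):
    if label == 0:
        continue
    p, r = pred == label, ref == label
    iou = np.logical_and(p, r).sum() / np.logical_or(p, r).sum()
    print(f"label {label}: IoU = {iou:.2f}")
```

panoptica computes these instance metrics for you; the sketch only illustrates why shared labels after matching make such comparisons straightforward.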