The nnDetection framework is geared towards medical object detection [2]. In its native form, nnDetection will predict bounding boxes, which may overlap. The PI-CAI challenge requires detection maps of non-touching lesion candidates, so we will transform the bounding box predictions to detection maps after inference with nnDetection. Setting up nnDetection and tweaking its implementation is not as straightforward as for the nnUNet or UNet baselines, but it can provide a strong csPCa detection model. Interested readers who would like to modify the implementation of nnDetection are referred to the nnDetection documentation. We only provide training and evaluation steps with the vanilla nnDetection framework.
To run nnDetection commands, you can use the Docker container specified in nndetection/training_docker/. This is a wrapper around nnDetection that facilitates training in a Docker container on a distributed system.
To build the Docker container, navigate to nndetection/training_docker/ and build it:
cd src/picai_baseline/nndetection/training_docker/
docker build . --tag joeranbosma/picai_nndetection:latest
If run successfully, this will result in a Docker container named joeranbosma/picai_nndetection:latest. Alternatively, the pre-built Docker container can be pulled:
docker pull joeranbosma/picai_nndetection:latest
We use the nnUNet Raw Data Archive, as obtained after the steps in Data Preprocessing, as the starting point. Because all lesions are non-touching, the nnUNet Raw Data Archive can be converted to the nnDetection Raw Data Archive format unambiguously, using the following command:
python -m picai_prep nnunet2nndet \
--input /workdir/nnUNet_raw_data/Task2201_picai_baseline \
--output /workdir/nnDet_raw_data/Task2201_picai_baseline
Alternatively, you can use Docker to run the Python script:
docker run --cpus=2 --memory=16gb --rm \
-v /path/to/workdir/:/workdir/ \
joeranbosma/picai_nnunet:latest python3 -m picai_prep nnunet2nndet --input /workdir/nnUNet_raw_data/Task2201_picai_baseline --output /workdir/nnDet_raw_data/Task2201_picai_baseline
nnDetection also requires user-defined cross-validation splits to ensure there is no patient overlap between training and validation splits. The official cross-validation splits can be stored to the working directory using the steps in nnU-Net - Cross-Validation Splits.
We advise exporting the cross-validation splits as individual files for use with picai_eval. To achieve this, please follow the steps in Cross-Validation Splits, or run this with Docker:
docker run --cpus=1 --memory=4gb -it --rm \
-v /path/to/workdir:/workdir \
joeranbosma/picai_nndetection:latest \
python -m picai_baseline.splits.picai_nnunet --output "/workdir/splits/picai_nnunet"
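To sanity-check the exported splits, you can load them from Python. A minimal sketch, assuming each exported file stores its subject IDs under a "subject_list" key (a hypothetical layout; adapt to the actual file contents):
import json
from pathlib import Path

# Load the exported validation splits and report the number of cases per fold.
splits_dir = Path("/workdir/splits/picai_nnunet")
for fold in range(5):
    with open(splits_dir / f"ds-config-valid-fold-{fold}.json") as fp:
        config = json.load(fp)
    subjects = config["subject_list"]  # hypothetical key, see note above
    print(f"Fold {fold}: {len(subjects)} validation cases")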
Running the first fold will start with preprocessing the raw images. After preprocessing is done, it will automatically start training.
docker run --cpus=8 --memory=32gb --shm-size=32gb --gpus='"device=6"' -it --rm \
-v /path/to/workdir:/workdir \
joeranbosma/picai_nndetection:latest nndet prep_train \
Task2201_picai_baseline /workdir/ \
--custom_split /workdir/nnUNet_raw_data/Task2201_picai_baseline/splits.json \
--fold 0
After preprocessing is done, the other folds can be run sequentially, or in parallel with the first fold (change to --fold 1, etc.).
Note: training runs in our environment with 32 GB RAM, 8 CPUs, and 1 GPU with 8 GB VRAM, and takes about 1 day per fold on an RTX 2080 Ti.
Before inference with nnDetection, consolidate the models first; see nnDetection's documentation for details. With the picai_nndetection Docker container, models can be consolidated using the following command:
docker run --cpus=8 --memory=32gb --shm-size=32gb --gpus='"device=0"' -it --rm \
-v /path/to/workdir:/workdir \
joeranbosma/picai_nndetection:latest nndet consolidate \
Task2201_picai_baseline RetinaUNetV001_D3V001_3d /workdir \
--sweep_boxes \
--results /workdir/results/nnDet \
--custom_split /workdir/nnUNet_raw_data/Task2201_picai_baseline/splits.json
This will generate an inference plan for model deployment. Additionally, it will generate cross-validation predictions of bounding boxes, which can be used for internal model development. See nnDetection - Evaluation for instructions on how to evaluate these predictions in the context of the PI-CAI Challenge.
To predict unseen images with the consolidated nnDetection models (i.e., the cross-validation ensemble), you can use the following command:
docker run --cpus=8 --memory=32gb --shm-size=32gb --gpus='"device=0"' -it --rm \
-v /path/to/workdir:/workdir \
-v /path/to/images:/input/images \
joeranbosma/picai_nndetection:latest nndet predict Task2201_picai_baseline RetinaUNetV001_D3V001_3d /workdir \
--fold -1 --check --resume --input /input/images --output /workdir/predictions/ --results /workdir/results/nnDet
For cross-validation with the predictions from nndet consolidate, generate detection maps for each fold. We provide a simple script for this, which transforms each bounding box into a cube filled with the corresponding lesion confidence. Any bounding box that overlaps with a bounding box of higher confidence is discarded, to conform with the non-touching lesion candidates required by the PI-CAI Challenge.
Note: this is by no means the best strategy to transform bounding boxes to detection maps. We leave it to participants to improve on this translation step, e.g. by using spheres instead of cubes.
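To illustrate the translation, here is a minimal sketch of the core logic. It assumes each box is given as (z1, y1, x1, z2, y2, x2) corner coordinates in voxels with one confidence score per box (a hypothetical layout; the provided nndet_generate_detection_maps.py script handles nnDetection's actual prediction format):
import numpy as np

def boxes_to_detection_map(boxes, scores, shape):
    """Paint each box as a cube filled with its confidence; discard boxes
    that overlap a higher-confidence box, so candidates stay non-touching."""
    detection_map = np.zeros(shape, dtype=np.float32)
    claimed = np.zeros(shape, dtype=bool)
    for idx in np.argsort(scores)[::-1]:  # highest confidence first
        z1, y1, x1, z2, y2, x2 = np.round(boxes[idx]).astype(int)
        region = (slice(z1, z2), slice(y1, y2), slice(x1, x2))
        if claimed[region].any():
            continue  # overlaps a higher-confidence box: discard
        detection_map[region] = scores[idx]
        claimed[region] = True
    return detection_map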
To convert boxes to detection maps with Docker:
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python /opt/code/nndet_generate_detection_maps.py --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold0/val_predictions
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python /opt/code/nndet_generate_detection_maps.py --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold1/val_predictions
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python /opt/code/nndet_generate_detection_maps.py --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold2/val_predictions
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python /opt/code/nndet_generate_detection_maps.py --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold3/val_predictions
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python /opt/code/nndet_generate_detection_maps.py --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold4/val_predictions
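The five commands above differ only in the fold number, so they can also be scripted; for example, a minimal Python sketch that issues the same docker run commands from the host:
import subprocess

base = "/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d"
for fold in range(5):
    # same invocation as the commands above, with the fold number substituted
    subprocess.run([
        "docker", "run", "--cpus=4", "--memory=32gb", "--rm",
        "-v", "/path/to/workdir:/workdir/",
        "joeranbosma/picai_nndetection:latest",
        "python", "/opt/code/nndet_generate_detection_maps.py",
        "--input", f"{base}/fold{fold}/val_predictions",
    ], check=True)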
To evaluate, we can use the picai_eval repository; see here for documentation. For evaluation from Python, you can adapt the script given in nnU-Net - Evaluation (you can remove the lesion extraction step, although this is not necessary, as lesion extraction from a detection map yields the detection map itself).
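For example, a minimal sketch using picai_eval's evaluate_folder helper, with the fold-0 paths from the commands below (we assume this API matches your installed picai_eval version):
from picai_eval import evaluate_folder

# evaluate fold-0 detection maps against the training annotations
metrics = evaluate_folder(
    y_det_dir="/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold0/val_predictions_detection_maps",
    y_true_dir="/workdir/nnUNet_raw_data/Task2201_picai_baseline/labelsTr",
)
print(metrics)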
For evaluation from the command line/Docker, we advise saving the subject lists to disk first, so you can ensure no cases are missing. Then, you can evaluate the detection maps using the following commands:
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python -m picai_eval --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold0/val_predictions_detection_maps --labels /workdir/nnUNet_raw_data/Task2201_picai_baseline/labelsTr/ --subject_list /workdir/splits/picai_nnunet/ds-config-valid-fold-0.json
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python -m picai_eval --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold1/val_predictions_detection_maps --labels /workdir/nnUNet_raw_data/Task2201_picai_baseline/labelsTr/ --subject_list /workdir/splits/picai_nnunet/ds-config-valid-fold-1.json
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python -m picai_eval --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold2/val_predictions_detection_maps --labels /workdir/nnUNet_raw_data/Task2201_picai_baseline/labelsTr/ --subject_list /workdir/splits/picai_nnunet/ds-config-valid-fold-2.json
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python -m picai_eval --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold3/val_predictions_detection_maps --labels /workdir/nnUNet_raw_data/Task2201_picai_baseline/labelsTr/ --subject_list /workdir/splits/picai_nnunet/ds-config-valid-fold-3.json
docker run --cpus=4 --memory=32gb --rm -v /path/to/workdir:/workdir/ joeranbosma/picai_nndetection:latest python -m picai_eval --input /workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold4/val_predictions_detection_maps --labels /workdir/nnUNet_raw_data/Task2201_picai_baseline/labelsTr/ --subject_list /workdir/splits/picai_nnunet/ds-config-valid-fold-4.json
The metrics will be displayed in the command line and stored to metrics.json (inside the --input directory). To load the metrics for subsequent analysis, we recommend using picai_eval, which allows on-the-fly calculation of metrics (described in more detail here).
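A minimal sketch, assuming picai_eval's Metrics class can be constructed from a stored metrics.json (attribute names taken from picai_eval's documented API; fold-0 path from above):
from picai_eval import Metrics

metrics = Metrics("/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/fold0/val_predictions_detection_maps/metrics.json")
print(metrics.auroc)  # patient-level AUROC
print(metrics.AP)     # lesion-level Average Precision
print(metrics.score)  # PI-CAI ranking score: (AUROC + AP) / 2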
Once training is complete, you are ready to make an algorithm submission. Please read about Submission of Inference Containers to the Open Development Phase first. The grand-challenge algorithm submission template for this algorithm can be found here.
To deploy your own nnDetection algorithm, you need to transfer the trained models. Inference with nnDetection requires the following files (for the task name and trainer specified above):
~/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated
├── config.yaml
├── model_fold0.ckpt
├── model_fold1.ckpt
├── model_fold2.ckpt
├── model_fold3.ckpt
├── model_fold4.ckpt
└── plan_inference.pkl
As well as:
~/workdir/nnDet_raw_data/Task2201_picai_baseline/dataset.json
These files can be collected through the command line as follows:
mkdir -p /path/to/repos/picai_nndetection_gc_algorithm/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated
cp /path/to/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/config.yaml /path/to/repos/picai_nndetection_gc_algorithm/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/config.yaml
cp /path/to/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold0.ckpt /path/to/repos/picai_nndetection_gc_algorithm/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold0.ckpt
cp /path/to/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold1.ckpt /path/to/repos/picai_nndetection_gc_algorithm/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold1.ckpt
cp /path/to/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold2.ckpt /path/to/repos/picai_nndetection_gc_algorithm/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold2.ckpt
cp /path/to/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold3.ckpt /path/to/repos/picai_nndetection_gc_algorithm/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold3.ckpt
cp /path/to/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold4.ckpt /path/to/repos/picai_nndetection_gc_algorithm/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/model_fold4.ckpt
cp /path/to/workdir/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/plan_inference.pkl /path/to/repos/picai_nndetection_gc_algorithm/results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated/plan_inference.pkl
cp /path/to/workdir/nnDet_raw_data/Task2201_picai_baseline/dataset.json /path/to/repos/picai_nndetection_gc_algorithm/results/nnDet/Task2201_picai_baseline/dataset.json
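Equivalently, a minimal Python sketch that performs the same copies (identical source and destination paths as the cp commands above):
import shutil
from pathlib import Path

src = Path("/path/to/workdir")
dst = Path("/path/to/repos/picai_nndetection_gc_algorithm")
rel = Path("results/nnDet/Task2201_picai_baseline/RetinaUNetV001_D3V001_3d/consolidated")

# collect the consolidated model files
(dst / rel).mkdir(parents=True, exist_ok=True)
for name in ["config.yaml", "plan_inference.pkl"] + [f"model_fold{i}.ckpt" for i in range(5)]:
    shutil.copy(src / rel / name, dst / rel / name)

# dataset.json goes under results/nnDet/Task2201_picai_baseline/ in the algorithm repo
shutil.copy(
    src / "nnDet_raw_data/Task2201_picai_baseline/dataset.json",
    dst / "results/nnDet/Task2201_picai_baseline/dataset.json",
)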
After collecting these files, please continue with the instructions provided in Submission of Inference Containers to the Open Development Phase.
The semi-supervised nnDetection model is trained in much the same way as the supervised nnDetection model. To train it, prepare the dataset using prepare_data_semi_supervised.py (see Data Preprocessing for details). Then, follow the steps above, replacing Task2201_picai_baseline with Task2203_picai_baseline.
[1] Fabian Isensee, Paul F. Jaeger, Simon A. A. Kohl, Jens Petersen and Klaus H. Maier-Hein. "nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation". Nature Methods 18.2 (2021): 203-211.
[2] Michael Baumgartner, Paul F. Jaeger, Fabian Isensee, Klaus H. Maier-Hein. "nnDetection: A Self-configuring Method for Medical Object Detection". International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2021.
[3] Joeran Bosma, Anindo Saha, Matin Hosseinzadeh, Ilse Slootweg, Maarten de Rooij, Henkjan Huisman. "Semi-supervised learning with report-guided lesion annotation for deep learning-based prostate cancer detection in bpMRI". arXiv:2112.05151.
[4] Joeran Bosma, Natalia Alves and Henkjan Huisman. "Performant and Reproducible Deep Learning-Based Cancer Detection Models for Medical Imaging". Under Review.