Releases · DIAGNijmegen/picai_baseline
Version 0.8.2
Changes w.r.t. Version 0.7.1:
Baseline algorithms:
- Crop very large cases for nnU-Net and nnDetection training
- Allow skipping validation inference
- UNet training flexibility
- UNet training Docker
- Combine labels in workdir
- Flexible splits specification
- Bugfix for additional training arguments
Evaluation:
SageMaker:
- Restructure training on SageMaker (#32):
  - Split preprocessing and training jobs
  - Add bash scripts to test preprocessing and training locally
  - SageMaker training pipeline for semi-supervised nnU-Net
  - Additional SageMaker documentation
  - SageMaker training update
  - Preprocessing cleanup
Preprocessing:
- Preprocessing flexibility (#34):
  - Convert preprocessing scripts (for supervised and semi-supervised learning) to importable functions.
  - Allow preprocessing of the data using the command line interface.
- Allow any split when resampling annotations (#37)
Documentation:
Bugfixes:
Version 0.7.1
- SageMaker training - part 1 (#18)
  - Training scripts for SSL nnU-Net on SageMaker
  - Improve the robustness of `train.py` to make debugging easier
  - Install dependencies of nnU-Net in SageMaker's `requirements.txt`
  - Install modified nnU-Net without requirements, to prevent requiring internet access
  - Convert U-Net `plan_overview` to a function
  - Training scripts for SSL U-Net on SageMaker
  - Add cross-validation splits with 10 cases for debugging
  - Add missing U-Net dependency
- SageMaker training - part 2 (#26)
  - Data preprocessing: improved specification of preprocessing settings, and logging of unexpected parameters
  - Improved training in distributed environments (such as SageMaker)
  - nnU-Net: improved preprocessing performance with `nnUNet_tl`
  - nnU-Net: preprocess scans to a maximum physical size of 81 x 192 x 192 mm
  - U-Net: improve robustness of preprocessing by skipping cases with label interpolation errors
  - U-Net: add missing dependency
  - Cross-validation splits: add PI-CAI PubPrivTrain, unit tests, and a bugfix when using multiple splits at once
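The 81 x 192 x 192 mm limit above is a physical size budget, so the actual voxel crop shape depends on each scan's spacing. A minimal sketch of that conversion (the helper name is hypothetical, not the package's actual code):

```python
# Convert a physical crop size (mm) to a voxel-count crop for a given spacing.
# 81 x 192 x 192 mm is the maximum physical size from the release notes;
# the helper itself is an illustrative sketch, not picai_baseline's code.
MAX_SIZE_MM = (81.0, 192.0, 192.0)  # (z, y, x)

def crop_shape_voxels(spacing_mm, max_size_mm=MAX_SIZE_MM):
    """Largest voxel shape that fits within the physical size budget."""
    return tuple(int(size / sp) for size, sp in zip(max_size_mm, spacing_mm))

print(crop_shape_voxels((3.0, 0.5, 0.5)))  # (27, 384, 384)
```

With a typical prostate MRI spacing of 3.0 x 0.5 x 0.5 mm, the budget corresponds to at most 27 x 384 x 384 voxels.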
- Performance optimization
- PI-CAI PubPrivTrain cross-validation splits (#21)
- Improved logging in the preprocessing script
- Improve input specification of the preprocessing scripts
  - The location of the dataset and the working directory can now be specified in three ways:
    - Using a command line argument (e.g., `--workdir=...`)
    - Using an environment variable (e.g., `ENV workdir=...`, for example in a Dockerfile, a `docker run` command, or Python)
    - Mounting folders to the default location (i.e., with the `-v` flag in `docker run`)
  - Images and labels can now be located at separate locations.
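The three specification routes above amount to a precedence chain: CLI argument, then environment variable, then the default mount location. A minimal sketch of such a resolver (the function name and the `/workdir` default are assumptions for illustration, not the package's actual code):

```python
import argparse
import os
from pathlib import Path

DEFAULT_WORKDIR = Path("/workdir")  # hypothetical default mount location

def resolve_workdir(argv=None):
    """Resolve the working directory: CLI argument > environment variable > default."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--workdir", default=None)
    args, _ = parser.parse_known_args(argv)
    if args.workdir:
        return Path(args.workdir)
    env = os.environ.get("workdir")
    if env:
        return Path(env)
    return DEFAULT_WORKDIR
```

With this ordering, an explicit `--workdir=...` always wins, a `workdir` environment variable (e.g. set in a Dockerfile) is the fallback, and a folder mounted at the default location needs no configuration at all.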
- Update documentation
- Fix for nnU-Net inference softmax (#12)
  - Fix how nnU-Net softmax predictions are converted from their `.npz` format of the cropped image to the physical extent of the original image
  - Evaluate while cropping away exterior predictions
  - Single configurable nnU-Net evaluation script
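The softmax fix above maps predictions from the cropped grid back onto the original image grid. A minimal NumPy sketch of the array part of that operation, assuming the crop's start indices are known (the function name is hypothetical; the real fix also has to restore spacing/origin metadata):

```python
import numpy as np

def uncrop_softmax(softmax_cropped, original_shape, bbox_start):
    """Paste a cropped softmax map back into the original image grid.

    Voxels outside the crop receive probability 0; this sketch only
    handles the array geometry, not the image metadata.
    """
    full = np.zeros(original_shape, dtype=softmax_cropped.dtype)
    region = tuple(
        slice(start, start + size)
        for start, size in zip(bbox_start, softmax_cropped.shape)
    )
    full[region] = softmax_cropped
    return full
```

This is also why "cropping away exterior predictions" during evaluation is safe: everything outside the restored region is zero by construction.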
- Bugfixes
- Cleanup
  - Encapsulate `plan_overview` in a function
  - Command line options for preprocessing settings (#16)
Version 0.1
Initial release.