This repository contains the implementation of a self-configuring, adaptive vision transformer for the segmentation of 3D images.
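As a rough illustration of the general idea only (the class names, patch size, and embedding dimension below are hypothetical and do not mirror this repository's actual modules), the sketch shows how a 3D volume can be split into patches and fed to a standard transformer encoder in PyTorch:

```python
# Illustrative sketch only: a minimal 3D patch embedding feeding a standard
# transformer encoder. Names and hyperparameters are assumptions, not the
# repository's real implementation.
import torch
import torch.nn as nn


class PatchEmbed3D(nn.Module):
    """Split a 3D volume into non-overlapping patches and project them to tokens."""

    def __init__(self, patch_size=4, in_channels=1, embed_dim=96):
        super().__init__()
        # A strided 3D convolution is a common way to embed volumetric patches.
        self.proj = nn.Conv3d(in_channels, embed_dim,
                              kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        # x: (B, C, D, H, W) -> (B, N, embed_dim), where N = number of patches
        x = self.proj(x)                      # (B, embed_dim, D', H', W')
        return x.flatten(2).transpose(1, 2)   # (B, D'*H'*W', embed_dim)


if __name__ == "__main__":
    volume = torch.randn(1, 1, 64, 64, 64)        # one single-channel volume
    tokens = PatchEmbed3D()(volume)               # (1, 4096, 96)
    layer = nn.TransformerEncoderLayer(d_model=96, nhead=8, batch_first=True)
    encoder = nn.TransformerEncoder(layer, num_layers=2)
    print(encoder(tokens).shape)                  # torch.Size([1, 4096, 96])
```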
- Anaconda (for managing the environment)
- Clone the repository:
git clone https://github.com/Anne-Andresen/3D-Vision-transformer.git
cd 3D-Vision-transformer
- Create and activate the conda environment:
conda env create -f environment.yml
source activate Former
- Install the package in editable mode:
pip install -e .
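Before launching training, it can help to confirm that the environment exposes a CUDA device to PyTorch. This check is an assumption about the setup rather than a documented step; it only uses standard PyTorch calls:

```python
# Optional sanity check: 3D segmentation training is GPU-bound, so verify that
# PyTorch can see a CUDA-capable device before starting long runs.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```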
To train the model, follow these steps:
- Convert the dataset:
nnFormer_convert_decathlon_task -i ../DATASET/Former/Former_data/Task01_OAR
- Plan and preprocess the dataset:
nnFormer_plan_and_preprocess -t 1
- Train the model:
nnFormer_train DATASET_NAME_OR_ID 3d_lowres FOLD [--npz]
- Example with DATASET_NAME_OR_ID set to task 1 (Task01_OAR, as used in the preprocessing step):
nnFormer_train -t 1
- Example FOLD values: 0, 1, 2, 3, 4 (see the sketch after this list for training all folds in sequence)
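A minimal sketch for running all five cross-validation folds one after another, assuming the nnFormer_train argument order shown above; the task ID "1" is only an example matching Task01_OAR, so replace it with your own DATASET_NAME_OR_ID:

```python
# Hypothetical helper: launch nnFormer_train for every fold of the 5-fold
# cross-validation, following the command format documented above.
import subprocess

TASK = "1"  # example only; replace with your DATASET_NAME_OR_ID

for fold in range(5):
    subprocess.run(
        ["nnFormer_train", TASK, "3d_lowres", str(fold), "--npz"],
        check=True,  # stop if a fold fails
    )
```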
For any questions or support, you can reach me via email at:
[email protected]
Note: This README file is a work in progress and will be updated as the project evolves.