
AdaptSAM: Adapt Segment Anything Model Based on Detectron2

AdaptSAM is a library built on top of Detectron2 that adapts the Segment Anything Model (SAM) to custom COCO-format datasets. It supports point prompt training and uses LoRA for lightweight, customizable adaptation of SAM.
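For readers unfamiliar with LoRA, the sketch below shows the general idea: a frozen pretrained linear layer (such as one of SAM's attention projections) is wrapped with a trainable low-rank update, so only a small number of extra parameters are learned. This is a generic illustration, not the repo's actual implementation.

# Minimal LoRA sketch (generic; not AdaptSAM's exact code).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pretrained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, r, bias=False)
        self.lora_b = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # start identical to the base layer
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

# Example: wrap a 768-d projection the way one might wrap an attention projection in SAM.
proj = nn.Linear(768, 768)
adapted = LoRALinear(proj, r=8)
print(sum(p.numel() for p in adapted.parameters() if p.requires_grad))  # only LoRA params train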

Getting Started with SAM Detectron2

Command line-based Training & Evaluation

We provide a script tools/lazyconfig_train_net.py that trains all configurations of SAM.

To train a model with tools/lazyconfig_train_net.py, first prepare the datasets following the instructions in datasets/README.md, then run the following for single-node (8-GPU) training:

(node0)$ python ./tools/lazyconfig_train_net.py --config-file configs/finetune/finetune_lora_sam_coco.py --num-gpus 8 
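Detectron2 lazy configs such as configs/finetune/finetune_lora_sam_coco.py are plain Python files that describe the model, dataloader, optimizer, and training loop via LazyCall. The snippet below is only an illustrative sketch of that structure; the field names and values are assumptions, not the repo's actual config.

# Illustrative LazyConfig structure only; the real finetune_lora_sam_coco.py differs.
import torch
from detectron2.config import LazyCall as L
from detectron2.solver.build import get_default_optimizer_params

optimizer = L(torch.optim.AdamW)(
    # params.model is filled in with the built model before the optimizer is instantiated
    params=L(get_default_optimizer_params)(weight_decay_norm=0.0),
    lr=1e-4,
    weight_decay=0.05,
)

train = dict(
    output_dir="./output/finetune_lora_sam_coco",  # checkpoints and logs
    init_checkpoint="",                            # SAM weights to start from
    max_iter=90000,
    eval_period=5000,
)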

To evaluate a trained model, run on a single node:

(node0)$ python ./tools/lazyconfig_train_net.py --config-file configs/finetune/finetune_lora_sam_coco.py --num-gpus 8 --eval-only --init-from /path/to/checkpoint

To evaluate the original (unadapted) SAM model, run on a single node:

(node0)$ python ./tools/lazyconfig_train_net.py --config-file configs/eval/eval_sam_coco.py --num-gpus 4 --eval-only

Installation

Our environment requirements are the same as ODISE's; for installation details, please refer to ODISE.

Install dependencies by running:

conda create -n adaptsam python=3.9
conda activate adaptsam
conda install pytorch=1.13.1 torchvision=0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
conda install -c "nvidia/label/cuda-11.6.1" libcusolver-dev
pip install -U opencv-python

# under your working directory
git clone https://github.com/facebookresearch/detectron2.git
cd detectron2
pip install -e .
pip install git+https://github.com/cocodataset/panopticapi.git
pip install git+https://github.com/mcordts/cityscapesScripts.git

cd ..
git clone [email protected]:YukunLi99/AdaptSAM.git
cd AdaptSAM
pip install -e .
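As an optional sanity check (our suggestion, not a step from the repo), verify that the CUDA-enabled PyTorch build and Detectron2 import correctly:

import torch
import detectron2
print("torch:", torch.__version__, "CUDA available:", torch.cuda.is_available())
print("detectron2:", detectron2.__version__)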

Features

  • Compatible with all features of the Detectron2 framework
  • Supports custom COCO-format datasets
  • Supports point prompt training (see the sampling sketch below)
  • Supports LoRA-based adaptation
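
The sketch below illustrates one common way point prompt training works: sample a random foreground pixel from a ground-truth instance mask and feed it to the prompt encoder as a positive click. It is a simplified illustration under that assumption, not the repo's exact sampling code.

# Hypothetical point-prompt sampling from a ground-truth instance mask.
import numpy as np

def sample_point_prompt(gt_mask: np.ndarray, rng: np.random.Generator):
    """gt_mask: (H, W) boolean array for one instance."""
    ys, xs = np.nonzero(gt_mask)
    idx = rng.integers(len(xs))
    # SAM-style point prompts are (x, y) coordinates with label 1 for a positive click.
    return np.array([[xs[idx], ys[idx]]], dtype=np.float32), np.array([1], dtype=np.int64)

rng = np.random.default_rng(0)
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:6] = True
coords, labels = sample_point_prompt(mask, rng)
print(coords, labels)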

Prepare Datasets

Dataset preparation for AdaptSAM follows Detectron2 and Mask2Former.

For Pascal VOC instance segmentation, run the following command to generate a COCO-format JSON file (refer to voc2coco):

$ python voc2coco.py \
    --ann_dir /path/to/annotation/dir \
    --ann_ids /path/to/annotations/ids/list.txt \
    --labels /path/to/labels.txt \
    --output /path/to/output.json
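
Once the JSON file is generated, it can be registered with Detectron2 so a config can refer to the dataset by name. The dataset name and paths below are placeholders, not values used by the repo:

# Register a custom COCO-format dataset with Detectron2 (placeholder name and paths).
from detectron2.data.datasets import register_coco_instances

register_coco_instances(
    "voc_instance_train",        # arbitrary dataset name used in configs
    {},                          # extra metadata
    "/path/to/output.json",      # JSON produced by voc2coco.py
    "/path/to/JPEGImages",       # image root directory
)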

Results

| Method        | Dataset | $AP_{100}$ | $AR_{100}$ | $AR_{s}$ | $AR_{m}$ | $AR_{l}$ |
|---------------|---------|------------|------------|----------|----------|----------|
| SAM-Base      | VOC     | 1.3        | 34         | 33.8     | 28.4     | 37.5     |
| AdaptSAM-Base | VOC     | 5.8        | 52         | 22.6     | 38.1     | 65.8     |
| SAM-Huge      | COCO    | 1.8        | 32.7       | 23.5     | 37.8     | 41.8     |
| AdaptSAM-Huge | COCO    | 10.2       | 49.7       | 28.4     | 59.5     | 73.2     |

Acknowledgement

Code is largely based on Detectron2, SAM, Mask2Former, and ODISE.

License

This project is released under the same license as the SAM model.
