Semantic segmentation on ADE20K is implemented on top of MMSegmentation.
| Model | mIoU | Latency | Ckpt | Log |
|---|---|---|---|---|
| RepViT-M1.1 | 40.6 | 4.9 ms | M1.1 | M1.1 |
| RepViT-M1.5 | 43.6 | 6.4 ms | M1.5 | M1.5 |
| RepViT-M2.3 | 46.1 | 9.9 ms | M2.3 | M2.3 |
The backbone latency is measured on an iPhone 12 with 512×512 image crops, using Core ML Tools.
Install mmcv-full and MMSegmentation v0.30.0 (later versions should work as well). The easiest way is to install them via MIM:
```bash
pip install -U openmim
mim install mmcv-full==1.7.1
mim install mmseg==0.30.0
```
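A quick sanity check that both packages are importable at the expected versions (optional):
```bash
# Print the installed mmcv-full and mmseg versions.
python -c "import mmcv, mmseg; print(mmcv.__version__, mmseg.__version__)"
```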
We benchmark RepViT on the challenging ADE20K dataset, which can be downloaded and prepared following the instructions in MMSegmentation. The data should be organized as follows:
```
├── segmentation
│   ├── data
│   │   ├── ade
│   │   │   ├── ADEChallengeData2016
│   │   │   │   ├── annotations
│   │   │   │   │   ├── training
│   │   │   │   │   ├── validation
│   │   │   │   ├── images
│   │   │   │   │   ├── training
│   │   │   │   │   ├── validation
```
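As a reference, here is a minimal sketch of the download and extraction steps; the URL is the one listed in the MMSegmentation dataset preparation guide, so verify it against the official instructions before use:
```bash
# Download the ADE20K scene parsing data and unpack it into the expected location.
mkdir -p segmentation/data/ade
cd segmentation/data/ade
wget http://data.csail.mit.edu/places/ADEchallenge/ADEChallengeData2016.zip
unzip ADEChallengeData2016.zip
```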
We provide a multi-GPU testing script. Specify the config file, the checkpoint, and the number of GPUs to use:
```bash
./tools/dist_test.sh <config_file> <path/to/checkpoint> <num_gpus> --eval mIoU
```
For example, to test RepViT-M1.1 on ADE20K on an 8-GPU machine:
```bash
./tools/dist_test.sh configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py path/to/repvit_m1_1_ade20k.pth 8 --eval mIoU
```
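If only a single GPU is available, the evaluation can also be run through MMSegmentation's plain test entry point. A minimal sketch, assuming the repo keeps the standard tools/test.py from MMSegmentation v0.x:
```bash
# Single-GPU evaluation with the standard MMSegmentation test script.
python tools/test.py configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py path/to/repvit_m1_1_ade20k.pth --eval mIoU
```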
Download the ImageNet-1K pretrained weights into ./pretrain.
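For instance, a minimal sketch (the checkpoint URL is a placeholder; use the actual ImageNet-1K links published with the RepViT releases):
```bash
mkdir -p pretrain
# Placeholder URL: replace with the real ImageNet-1K checkpoint link for the chosen model.
wget -P pretrain <repvit_m1_1_imagenet_1k_checkpoint_url>
```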
We provide a PyTorch distributed data parallel (DDP) training script, dist_train.sh. For example, to train RepViT-M1.1 on an 8-GPU machine:
```bash
./tools/dist_train.sh configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py 8
```
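Arguments after the GPU count are forwarded to MMSegmentation's tools/train.py, so standard options such as --work-dir can be appended. A minimal sketch, assuming dist_train.sh follows MMSegmentation's stock script:
```bash
# Write logs and checkpoints to a custom working directory.
./tools/dist_train.sh configs/sem_fpn/fpn_repvit_m1_1_ade20k_40k.py 8 --work-dir work_dirs/fpn_repvit_m1_1_ade20k_40k
```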
Tip: remember to specify the config file and the number of GPUs.