From 3a6a49d186606332b19788ee16ab64ff37d59e46 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Haian=20Huang=28=E6=B7=B1=E5=BA=A6=E7=9C=B8=29?=
<1286304229@qq.com>
Date: Thu, 6 Apr 2023 11:03:21 +0800
Subject: [PATCH] Update 3.x link to main (#10079)
---
.circleci/test.yml | 13 +-
README.md | 67 +++++----
README_zh-CN.md | 65 ++++-----
configs/fast_rcnn/README.md | 2 +-
configs/objects365/README.md | 20 +--
configs/queryinst/metafile.yml | 2 +-
configs/rtmdet/README.md | 22 +--
configs/rtmdet/classification/README.md | 2 +-
configs/scnet/README.md | 2 +-
docker/Dockerfile | 4 +-
docker/serve/Dockerfile | 2 +-
docker/serve_cn/Dockerfile | 2 +-
docs/en/advanced_guides/how_to.md | 2 +-
docs/en/conf.py | 4 +-
docs/en/get_started.md | 7 +-
docs/en/model_zoo.md | 136 +++++++++---------
docs/en/notes/changelog.md | 46 ++++++
docs/en/notes/changelog_v2.x.md | 2 +-
docs/en/notes/compatibility.md | 2 +-
docs/en/notes/faq.md | 33 ++---
docs/en/overview.md | 8 +-
docs/en/stat.py | 2 +-
docs/en/user_guides/config.md | 30 ++--
docs/en/user_guides/deploy.md | 4 +-
docs/en/user_guides/inference.md | 8 +-
docs/en/user_guides/label_studio.md | 2 +-
docs/en/user_guides/test.md | 4 +-
docs/en/user_guides/train.md | 4 +-
docs/en/user_guides/useful_hooks.md | 2 +-
docs/en/user_guides/useful_tools.md | 2 +-
.../advanced_guides/customize_dataset.md | 6 +-
.../zh_cn/advanced_guides/customize_losses.md | 4 +-
.../advanced_guides/customize_runtime.md | 4 +-
docs/zh_cn/advanced_guides/how_to.md | 4 +-
docs/zh_cn/conf.py | 2 +-
docs/zh_cn/get_started.md | 7 +-
docs/zh_cn/model_zoo.md | 124 ++++++++--------
docs/zh_cn/notes/faq.md | 5 +-
docs/zh_cn/overview.md | 8 +-
docs/zh_cn/stat.py | 2 +-
docs/zh_cn/user_guides/config.md | 32 ++---
docs/zh_cn/user_guides/dataset_prepare.md | 2 +-
docs/zh_cn/user_guides/deploy.md | 4 +-
docs/zh_cn/user_guides/inference.md | 8 +-
docs/zh_cn/user_guides/label_studio.md | 2 +-
docs/zh_cn/user_guides/test.md | 6 +-
docs/zh_cn/user_guides/train.md | 6 +-
docs/zh_cn/user_guides/useful_hooks.md | 2 +-
docs/zh_cn/user_guides/useful_tools.md | 2 +-
mmdet/__init__.py | 2 +-
mmdet/datasets/base_det_dataset.py | 2 +-
mmdet/datasets/transforms/loading.py | 2 +-
mmdet/evaluation/metrics/cityscapes_metric.py | 2 +-
mmdet/evaluation/metrics/coco_metric.py | 2 +-
.../metrics/coco_panoptic_metric.py | 2 +-
mmdet/evaluation/metrics/crowdhuman_metric.py | 2 +-
.../metrics/dump_proposals_metric.py | 2 +-
mmdet/evaluation/metrics/lvis_metric.py | 2 +-
mmdet/version.py | 2 +-
projects/Detic/README.md | 4 +-
projects/DiffusionDet/README.md | 6 +-
projects/EfficientDet/README.md | 6 +-
projects/SparseInst/README.md | 6 +-
projects/example_project/README.md | 8 +-
requirements/mminstall.txt | 2 +-
requirements/readthedocs.txt | 4 +-
66 files changed, 417 insertions(+), 370 deletions(-)
diff --git a/.circleci/test.yml b/.circleci/test.yml
index 994d4b94e01..f98014366dd 100644
--- a/.circleci/test.yml
+++ b/.circleci/test.yml
@@ -91,7 +91,7 @@ jobs:
type: string
cuda:
type: enum
- enum: ["10.1", "10.2", "11.1"]
+ enum: ["10.1", "10.2", "11.1", "11.7"]
cudnn:
type: integer
default: 7
@@ -161,8 +161,8 @@ workflows:
- lint
- build_cpu:
name: maximum_version_cpu
- torch: 1.13.0
- torchvision: 0.14.0
+ torch: 2.0.0
+ torchvision: 0.15.1
python: 3.9.0
requires:
- minimum_version_cpu
@@ -178,6 +178,13 @@ workflows:
cuda: "10.2"
requires:
- hold
+ - build_cuda:
+ name: maximum_version_gpu
+ torch: 2.0.0
+ cuda: "11.7"
+ cudnn: 8
+ requires:
+ - hold
merge_stage_test:
when:
not: << pipeline.parameters.lint_only >>
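The CI matrix above tests fixed torch/torchvision pairs, raising the ceiling to torch 2.0.0 with CUDA 11.7 while the README still states PyTorch 1.6+ as the floor. A minimal sketch, not part of the repo, of checking whether a torch version string falls inside that tested range:

```python
def in_tested_range(version: str, low: str = "1.6.0", high: str = "2.0.0") -> bool:
    """Return True if `version` lies within the CI-tested torch range (inclusive)."""
    def parse(v: str):
        # Strip local build suffixes like "+cu117" before comparing numerically.
        return tuple(int(x) for x in v.split("+")[0].split(".")[:3])
    return parse(low) <= parse(version) <= parse(high)

print(in_tested_range("2.0.0+cu117"))  # the new maximum_version_gpu job -> True
print(in_tested_range("1.5.1"))        # below the supported floor -> False
```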
diff --git a/README.md b/README.md
index 07d9b920fa9..1718b0c868a 100644
--- a/README.md
+++ b/README.md
@@ -21,15 +21,15 @@
[](https://pypi.org/project/mmdet)
[](https://mmdetection.readthedocs.io/en/latest/)
[](https://github.com/open-mmlab/mmdetection/actions)
-[](https://codecov.io/gh/open-mmlab/mmdetection)
-[](https://github.com/open-mmlab/mmdetection/blob/master/LICENSE)
+[](https://codecov.io/gh/open-mmlab/mmdetection)
+[](https://github.com/open-mmlab/mmdetection/blob/main/LICENSE)
[](https://github.com/open-mmlab/mmdetection/issues)
[](https://github.com/open-mmlab/mmdetection/issues)
-[📘Documentation](https://mmdetection.readthedocs.io/en/3.x/) |
-[🛠️Installation](https://mmdetection.readthedocs.io/en/3.x/get_started.html) |
-[👀Model Zoo](https://mmdetection.readthedocs.io/en/3.x/model_zoo.html) |
-[🆕Update News](https://mmdetection.readthedocs.io/en/3.x/notes/changelog.html) |
+[📘Documentation](https://mmdetection.readthedocs.io/en/latest/) |
+[🛠️Installation](https://mmdetection.readthedocs.io/en/latest/get_started.html) |
+[👀Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html) |
+[🆕Update News](https://mmdetection.readthedocs.io/en/latest/notes/changelog.html) |
[🚀Ongoing Projects](https://github.com/open-mmlab/mmdetection/projects) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmdetection/issues/new/choose)
@@ -66,7 +66,7 @@ English | [简体中文](README_zh-CN.md)
MMDetection is an open source object detection toolbox based on PyTorch. It is
a part of the [OpenMMLab](https://openmmlab.com/) project.
-The master branch works with **PyTorch 1.6+**.
+The main branch works with **PyTorch 1.6+**.
@@ -114,43 +114,40 @@ We are excited to announce our latest work on real-time object recognition tasks
-**v3.0.0rc6** was released in 24/2/2023:
+**v3.0.0** was released on 6/4/2023:
-- Support [Boxinst](configs/boxinst), [Objects365 Dataset](configs/objects365), and [Separated and Occluded COCO metric](docs/en/user_guides/useful_tools.md#coco-separated--occluded-mask-metric)
-- Support [ConvNeXt-V2](projects/ConvNeXt-V2), [DiffusionDet](projects/DiffusionDet), and inference of [EfficientDet](projects/EfficientDet) and [Detic](projects/Detic) in `Projects`
-- Refactor [DETR](configs/detr) series and support [Conditional-DETR](configs/conditional_detr), [DAB-DETR](configs/dab_detr), and [DINO](configs/dino)
-- Support `DetInferencer` for inference, Test Time Augmentation, and automatically importing modules from registry
-- Support RTMDet-Ins ONNXRuntime and TensorRT [deployment](configs/rtmdet/README.md#deployment-tutorial)
-- Support [calculating FLOPs of detectors](docs/en/user_guides/useful_tools.md#Model-Complexity)
+- Release the official version of MMDetection 3.0.0
+- Support semi-automatic annotation based on [Label-Studio](projects/LabelStudio) (#10039)
+- Support [EfficientDet](projects/EfficientDet) in projects (#9810)
## Installation
-Please refer to [Installation](https://mmdetection.readthedocs.io/en/3.x/get_started.html) for installation instructions.
+Please refer to [Installation](https://mmdetection.readthedocs.io/en/latest/get_started.html) for installation instructions.
## Getting Started
-Please see [Overview](https://mmdetection.readthedocs.io/en/3.x/get_started.html) for the general introduction of MMDetection.
+Please see [Overview](https://mmdetection.readthedocs.io/en/latest/get_started.html) for the general introduction of MMDetection.
-For detailed user guides and advanced guides, please refer to our [documentation](https://mmdetection.readthedocs.io/en/3.x/):
+For detailed user guides and advanced guides, please refer to our [documentation](https://mmdetection.readthedocs.io/en/latest/):
- User Guides
- - [Train & Test](https://mmdetection.readthedocs.io/en/3.x/user_guides/index.html#train-test)
- - [Learn about Configs](https://mmdetection.readthedocs.io/en/3.x/user_guides/config.html)
- - [Inference with existing models](https://mmdetection.readthedocs.io/en/3.x/user_guides/inference.html)
- - [Dataset Prepare](https://mmdetection.readthedocs.io/en/3.x/user_guides/dataset_prepare.html)
- - [Test existing models on standard datasets](https://mmdetection.readthedocs.io/en/3.x/user_guides/test.html)
- - [Train predefined models on standard datasets](https://mmdetection.readthedocs.io/en/3.x/user_guides/train.html)
- - [Train with customized datasets](https://mmdetection.readthedocs.io/en/3.x/user_guides/train.html#train-with-customized-datasets)
- - [Train with customized models and standard datasets](https://mmdetection.readthedocs.io/en/3.x/user_guides/new_model.html)
- - [Finetuning Models](https://mmdetection.readthedocs.io/en/3.x/user_guides/finetune.html)
- - [Test Results Submission](https://mmdetection.readthedocs.io/en/3.x/user_guides/test_results_submission.html)
- - [Weight initialization](https://mmdetection.readthedocs.io/en/3.x/user_guides/init_cfg.html)
- - [Use a single stage detector as RPN](https://mmdetection.readthedocs.io/en/3.x/user_guides/single_stage_as_rpn.html)
- - [Semi-supervised Object Detection](https://mmdetection.readthedocs.io/en/3.x/user_guides/semi_det.html)
- - [Useful Tools](https://mmdetection.readthedocs.io/en/3.x/user_guides/index.html#useful-tools)
+ - [Train & Test](https://mmdetection.readthedocs.io/en/latest/user_guides/index.html#train-test)
+ - [Learn about Configs](https://mmdetection.readthedocs.io/en/latest/user_guides/config.html)
+ - [Inference with existing models](https://mmdetection.readthedocs.io/en/latest/user_guides/inference.html)
+ - [Dataset Prepare](https://mmdetection.readthedocs.io/en/latest/user_guides/dataset_prepare.html)
+ - [Test existing models on standard datasets](https://mmdetection.readthedocs.io/en/latest/user_guides/test.html)
+ - [Train predefined models on standard datasets](https://mmdetection.readthedocs.io/en/latest/user_guides/train.html)
+ - [Train with customized datasets](https://mmdetection.readthedocs.io/en/latest/user_guides/train.html#train-with-customized-datasets)
+ - [Train with customized models and standard datasets](https://mmdetection.readthedocs.io/en/latest/user_guides/new_model.html)
+ - [Finetuning Models](https://mmdetection.readthedocs.io/en/latest/user_guides/finetune.html)
+ - [Test Results Submission](https://mmdetection.readthedocs.io/en/latest/user_guides/test_results_submission.html)
+ - [Weight initialization](https://mmdetection.readthedocs.io/en/latest/user_guides/init_cfg.html)
+ - [Use a single stage detector as RPN](https://mmdetection.readthedocs.io/en/latest/user_guides/single_stage_as_rpn.html)
+ - [Semi-supervised Object Detection](https://mmdetection.readthedocs.io/en/latest/user_guides/semi_det.html)
+ - [Useful Tools](https://mmdetection.readthedocs.io/en/latest/user_guides/index.html#useful-tools)
@@ -158,15 +155,15 @@ For detailed user guides and advanced guides, please refer to our [documentation
- - [Basic Concepts](https://mmdetection.readthedocs.io/en/3.x/advanced_guides/index.html#basic-concepts)
- - [Component Customization](https://mmdetection.readthedocs.io/en/3.x/advanced_guides/index.html#component-customization)
- - [How to](https://mmdetection.readthedocs.io/en/3.x/advanced_guides/index.html#how-to)
+ - [Basic Concepts](https://mmdetection.readthedocs.io/en/latest/advanced_guides/index.html#basic-concepts)
+ - [Component Customization](https://mmdetection.readthedocs.io/en/latest/advanced_guides/index.html#component-customization)
+ - [How to](https://mmdetection.readthedocs.io/en/latest/advanced_guides/index.html#how-to)
We also provide object detection colab tutorial [](demo/MMDet_Tutorial.ipynb) and instance segmentation colab tutorial [](demo/MMDet_InstanceSeg_Tutorial.ipynb).
-To migrate from MMDetection 2.x, please refer to [migration](https://mmdetection.readthedocs.io/en/3.x/migration.html).
+To migrate from MMDetection 2.x, please refer to [migration](https://mmdetection.readthedocs.io/en/latest/migration.html).
## Overview of Benchmark and Model Zoo
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 7c345369ce5..80392acd69f 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -21,15 +21,15 @@
[](https://pypi.org/project/mmdet)
[](https://mmdetection.readthedocs.io/en/latest/)
[](https://github.com/open-mmlab/mmdetection/actions)
-[](https://codecov.io/gh/open-mmlab/mmdetection)
-[](https://github.com/open-mmlab/mmdetection/blob/master/LICENSE)
+[](https://codecov.io/gh/open-mmlab/mmdetection)
+[](https://github.com/open-mmlab/mmdetection/blob/main/LICENSE)
[](https://github.com/open-mmlab/mmdetection/issues)
[](https://github.com/open-mmlab/mmdetection/issues)
-[📘使用文档](https://mmdetection.readthedocs.io/zh_CN/3.x/) |
-[🛠️安装教程](https://mmdetection.readthedocs.io/zh_CN/3.x/get_started.html) |
-[👀模型库](https://mmdetection.readthedocs.io/zh_CN/3.x/model_zoo.html) |
-[🆕更新日志](https://mmdetection.readthedocs.io/en/3.x/notes/changelog.html) |
+[📘使用文档](https://mmdetection.readthedocs.io/zh_CN/latest/) |
+[🛠️安装教程](https://mmdetection.readthedocs.io/zh_CN/latest/get_started.html) |
+[👀模型库](https://mmdetection.readthedocs.io/zh_CN/latest/model_zoo.html) |
+[🆕更新日志](https://mmdetection.readthedocs.io/en/latest/notes/changelog.html) |
[🚀进行中的项目](https://github.com/open-mmlab/mmdetection/projects) |
[🤔报告问题](https://github.com/open-mmlab/mmdetection/issues/new/choose)
@@ -113,43 +113,40 @@ MMDetection 是一个基于 PyTorch 的目标检测开源工具箱。它是 [Ope
-**v3.0.0rc6** 版本已经在 2023.2.24 发布:
+**v3.0.0** 版本已经在 2023.4.6 发布:
-- 支持了 [Boxinst](configs/boxinst), [Objects365 Dataset](configs/objects365) 和 [Separated and Occluded COCO metric](docs/zh_cn/user_guides/useful_tools.md#coco-分离和遮挡实例分割性能评估)
-- 在 `Projects` 中支持了 [ConvNeXt-V2](projects/ConvNeXt-V2), [DiffusionDet](projects/DiffusionDet) 和 [EfficientDet](projects/EfficientDet), [Detic](projects/Detic) 的推理
-- 重构了 [DETR](configs/detr) 系列并支持了 [Conditional-DETR](configs/conditional_detr), [DAB-DETR](configs/dab_detr) 和 [DINO](configs/dino)
-- 支持了通过 `DetInferencer` 用于推理, Test Time Augmentation 以及从注册表(registry)自动导入模块
-- 支持了 RTMDet-Ins 的 ONNXRuntime 和 TensorRT [部署](configs/rtmdet/README.md#deployment-tutorial)
-- 支持了检测器[计算 FLOPS](docs/zh_cn/user_guides/useful_tools.md#模型复杂度)
+- 发布 MMDetection 3.0.0 正式版
+- 基于 [Label-Studio](projects/LabelStudio) 支持半自动标注流程
+- projects 中支持了 [EfficientDet](projects/EfficientDet)
## 安装
-请参考[快速入门文档](https://mmdetection.readthedocs.io/zh_CN/3.x/get_started.html)进行安装。
+请参考[快速入门文档](https://mmdetection.readthedocs.io/zh_CN/latest/get_started.html)进行安装。
## 教程
-请阅读[概述](https://mmdetection.readthedocs.io/zh_CN/3.x/get_started.html)对 MMDetection 进行初步的了解。
+请阅读[概述](https://mmdetection.readthedocs.io/zh_CN/latest/get_started.html)对 MMDetection 进行初步的了解。
-为了帮助用户更进一步了解 MMDetection,我们准备了用户指南和进阶指南,请阅读我们的[文档](https://mmdetection.readthedocs.io/zh_CN/3.x/):
+为了帮助用户更进一步了解 MMDetection,我们准备了用户指南和进阶指南,请阅读我们的[文档](https://mmdetection.readthedocs.io/zh_CN/latest/):
- 用户指南
- - [训练 & 测试](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/index.html#train-test)
- - [学习配置文件](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/config.html)
- - [使用已有模型在标准数据集上进行推理](https://mmdetection.readthedocs.io/en/3.x/user_guides/inference.html)
- - [数据集准备](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/dataset_prepare.html)
- - [测试现有模型](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/test.html)
- - [在标准数据集上训练预定义的模型](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/train.html)
- - [在自定义数据集上进行训练](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/train.html#train-with-customized-datasets)
- - [在标准数据集上训练自定义模型](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/new_model.html)
- - [模型微调](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/finetune.html)
- - [提交测试结果](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/test_results_submission.html)
- - [权重初始化](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/init_cfg.html)
- - [将单阶段检测器作为 RPN](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/single_stage_as_rpn.html)
- - [半监督目标检测](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/semi_det.html)
- - [实用工具](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/index.html#useful-tools)
+ - [训练 & 测试](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/index.html#train-test)
+ - [学习配置文件](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/config.html)
+ - [使用已有模型在标准数据集上进行推理](https://mmdetection.readthedocs.io/en/latest/user_guides/inference.html)
+ - [数据集准备](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/dataset_prepare.html)
+ - [测试现有模型](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/test.html)
+ - [在标准数据集上训练预定义的模型](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/train.html)
+ - [在自定义数据集上进行训练](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/train.html#train-with-customized-datasets)
+ - [在标准数据集上训练自定义模型](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/new_model.html)
+ - [模型微调](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/finetune.html)
+ - [提交测试结果](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/test_results_submission.html)
+ - [权重初始化](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/init_cfg.html)
+ - [将单阶段检测器作为 RPN](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/single_stage_as_rpn.html)
+ - [半监督目标检测](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/semi_det.html)
+ - [实用工具](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/index.html#useful-tools)
@@ -157,9 +154,9 @@ MMDetection 是一个基于 PyTorch 的目标检测开源工具箱。它是 [Ope
- - [基础概念](https://mmdetection.readthedocs.io/zh_CN/3.x/advanced_guides/index.html#basic-concepts)
- - [组件定制](https://mmdetection.readthedocs.io/zh_CN/3.x/advanced_guides/index.html#component-customization)
- - [How to](https://mmdetection.readthedocs.io/zh_CN/3.x/advanced_guides/index.html#how-to)
+ - [基础概念](https://mmdetection.readthedocs.io/zh_CN/latest/advanced_guides/index.html#basic-concepts)
+ - [组件定制](https://mmdetection.readthedocs.io/zh_CN/latest/advanced_guides/index.html#component-customization)
+ - [How to](https://mmdetection.readthedocs.io/zh_CN/latest/advanced_guides/index.html#how-to)
@@ -167,7 +164,7 @@ MMDetection 是一个基于 PyTorch 的目标检测开源工具箱。它是 [Ope
同时,我们还提供了 [MMDetection 中文解读文案汇总](docs/zh_cn/article.md)
-若需要将2.x版本的代码迁移至新版,请参考[迁移文档](https://mmdetection.readthedocs.io/en/3.x/migration.html)。
+若需要将2.x版本的代码迁移至新版,请参考[迁移文档](https://mmdetection.readthedocs.io/en/latest/migration.html)。
## 基准测试和模型库
diff --git a/configs/fast_rcnn/README.md b/configs/fast_rcnn/README.md
index cd582ec8c6f..0bdc9359c7c 100644
--- a/configs/fast_rcnn/README.md
+++ b/configs/fast_rcnn/README.md
@@ -59,7 +59,7 @@ The `pred_instance` is an `InstanceData` containing the sorted boxes and scores
8
```
- Users can refer to [test tutorial](https://mmdetection.readthedocs.io/en/3.x/user_guides/test.html) for more details.
+ Users can refer to [test tutorial](https://mmdetection.readthedocs.io/en/latest/user_guides/test.html) for more details.
- Then, modify the path of `proposal_file` in the dataset and using `ProposalBroadcaster` to process both ground truth bounding boxes and region proposals in pipelines.
An example of Fast R-CNN important setting can be seen as below:
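The step above, wrapping the shared geometric transforms in `ProposalBroadcaster` so that ground-truth boxes and pre-computed proposals stay aligned, can be sketched as a pipeline fragment (field values illustrative, not copied from the actual config):

```python
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadProposals', num_max_proposals=None),
    dict(type='LoadAnnotations', with_bbox=True),
    # ProposalBroadcaster applies the same geometric transforms to both
    # the ground-truth boxes and the loaded region proposals.
    dict(
        type='ProposalBroadcaster',
        transforms=[
            dict(type='Resize', scale=(1333, 800), keep_ratio=True),
            dict(type='RandomFlip', prob=0.5),
        ]),
    dict(type='PackDetInputs'),
]
```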
diff --git a/configs/objects365/README.md b/configs/objects365/README.md
index e54928b649a..fca0dbfc945 100644
--- a/configs/objects365/README.md
+++ b/configs/objects365/README.md
@@ -87,16 +87,16 @@ Objects 365 includes 11 categories of people, clothing, living room, bathroom, k
### Objects365 V1
-| Architecture | Backbone | Style | Lr schd | Mem (GB) | box AP | Config | Download |
-| :----------: | :------: | :-----: | :-----: | :------: | :----: | :------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
-| Faster R-CNN | R-50 | pytorch | 1x | - | 19.6 | [config](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/objects365/faster-rcnn_r50_fpn_16xb4-1x_objects365v1.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v1/faster_rcnn_r50_fpn_16x4_1x_obj365v1_20221219_181226-9ff10f95.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v1/faster_rcnn_r50_fpn_16x4_1x_obj365v1_20221219_181226.log.json) |
-| Faster R-CNN | R-50 | pytorch | 1350K | - | 22.3 | [config](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/objects365/faster-rcnn_r50-syncbn_fpn_1350k_objects365v1.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_syncbn_1350k_obj365v1/faster_rcnn_r50_fpn_syncbn_1350k_obj365v1_20220510_142457-337d8965.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_syncbn_1350k_obj365v1/faster_rcnn_r50_fpn_syncbn_1350k_obj365v1_20220510_142457.log.json) |
-| Retinanet | R-50 | pytorch | 1x | - | 14.8 | [config](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/objects365/retinanet_r50_fpn_1x_objects365v1.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_1x_obj365v1/retinanet_r50_fpn_1x_obj365v1_20221219_181859-ba3e3dd5.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_1x_obj365v1/retinanet_r50_fpn_1x_obj365v1_20221219_181859.log.json) |
-| Retinanet | R-50 | pytorch | 1350K | - | 18.0 | [config](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/objects365/retinanet_r50-syncbn_fpn_1350k_objects365v1.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_syncbn_1350k_obj365v1/retinanet_r50_fpn_syncbn_1350k_obj365v1_20220513_111237-7517c576.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_syncbn_1350k_obj365v1/retinanet_r50_fpn_syncbn_1350k_obj365v1_20220513_111237.log.json) |
+| Architecture | Backbone | Style | Lr schd | Mem (GB) | box AP | Config | Download |
+| :----------: | :------: | :-----: | :-----: | :------: | :----: | :-------------------------------------------------------------------------------------------------------------------------------: | :--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+| Faster R-CNN | R-50 | pytorch | 1x | - | 19.6 | [config](https://github.com/open-mmlab/mmdetection/tree/main/configs/objects365/faster-rcnn_r50_fpn_16xb4-1x_objects365v1.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v1/faster_rcnn_r50_fpn_16x4_1x_obj365v1_20221219_181226-9ff10f95.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v1/faster_rcnn_r50_fpn_16x4_1x_obj365v1_20221219_181226.log.json) |
+| Faster R-CNN | R-50 | pytorch | 1350K | - | 22.3 | [config](https://github.com/open-mmlab/mmdetection/tree/main/configs/objects365/faster-rcnn_r50-syncbn_fpn_1350k_objects365v1.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_syncbn_1350k_obj365v1/faster_rcnn_r50_fpn_syncbn_1350k_obj365v1_20220510_142457-337d8965.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_syncbn_1350k_obj365v1/faster_rcnn_r50_fpn_syncbn_1350k_obj365v1_20220510_142457.log.json) |
+| Retinanet | R-50 | pytorch | 1x | - | 14.8 | [config](https://github.com/open-mmlab/mmdetection/tree/main/configs/objects365/retinanet_r50_fpn_1x_objects365v1.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_1x_obj365v1/retinanet_r50_fpn_1x_obj365v1_20221219_181859-ba3e3dd5.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_1x_obj365v1/retinanet_r50_fpn_1x_obj365v1_20221219_181859.log.json) |
+| Retinanet | R-50 | pytorch | 1350K | - | 18.0 | [config](https://github.com/open-mmlab/mmdetection/tree/main/configs/objects365/retinanet_r50-syncbn_fpn_1350k_objects365v1.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_syncbn_1350k_obj365v1/retinanet_r50_fpn_syncbn_1350k_obj365v1_20220513_111237-7517c576.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_syncbn_1350k_obj365v1/retinanet_r50_fpn_syncbn_1350k_obj365v1_20220513_111237.log.json) |
### Objects365 V2
-| Architecture | Backbone | Style | Lr schd | Mem (GB) | box AP | Config | Download |
-| :----------: | :------: | :-----: | :-----: | :------: | :----: | :--------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
-| Faster R-CNN | R-50 | pytorch | 1x | - | 19.8 | [config](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/objects365/faster-rcnn_r50_fpn_16xb4-1x_objects365v2.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v2/faster_rcnn_r50_fpn_16x4_1x_obj365v2_20221220_175040-5910b015.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v2/faster_rcnn_r50_fpn_16x4_1x_obj365v2_20221220_175040.log.json) |
-| Retinanet | R-50 | pytorch | 1x | - | 16.7 | [config](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/objects365/retinanet_r50_fpn_1x_objects365v2.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_1x_obj365v2/retinanet_r50_fpn_1x_obj365v2_20221223_122105-d9b191f1.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_1x_obj365v2/retinanet_r50_fpn_1x_obj365v2_20221223_122105.log.json) |
+| Architecture | Backbone | Style | Lr schd | Mem (GB) | box AP | Config | Download |
+| :----------: | :------: | :-----: | :-----: | :------: | :----: | :---------------------------------------------------------------------------------------------------------------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------: |
+| Faster R-CNN | R-50 | pytorch | 1x | - | 19.8 | [config](https://github.com/open-mmlab/mmdetection/tree/main/configs/objects365/faster-rcnn_r50_fpn_16xb4-1x_objects365v2.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v2/faster_rcnn_r50_fpn_16x4_1x_obj365v2_20221220_175040-5910b015.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/faster_rcnn_r50_fpn_16x4_1x_obj365v2/faster_rcnn_r50_fpn_16x4_1x_obj365v2_20221220_175040.log.json) |
+| Retinanet | R-50 | pytorch | 1x | - | 16.7 | [config](https://github.com/open-mmlab/mmdetection/tree/main/configs/objects365/retinanet_r50_fpn_1x_objects365v2.py) | [model](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_1x_obj365v2/retinanet_r50_fpn_1x_obj365v2_20221223_122105-d9b191f1.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/objects365/retinanet_r50_fpn_1x_obj365v2/retinanet_r50_fpn_1x_obj365v2_20221223_122105.log.json) |
diff --git a/configs/queryinst/metafile.yml b/configs/queryinst/metafile.yml
index 07c3d035d59..3ea3b00a945 100644
--- a/configs/queryinst/metafile.yml
+++ b/configs/queryinst/metafile.yml
@@ -15,7 +15,7 @@ Collections:
Title: 'Instances as Queries'
README: configs/queryinst/README.md
Code:
- URL: https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/detectors/queryinst.py
+ URL: https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/detectors/queryinst.py
Version: v2.18.0
Models:
diff --git a/configs/rtmdet/README.md b/configs/rtmdet/README.md
index 593e7b607ac..02c95466cc7 100644
--- a/configs/rtmdet/README.md
+++ b/configs/rtmdet/README.md
@@ -115,9 +115,9 @@ Here is a basic example of deploy RTMDet with [MMDeploy-1.x](https://github.com/
### Step1. Install MMDeploy
-Before starting the deployment, please make sure you install MMDetection-3.x and MMDeploy-1.x correctly.
+Before starting the deployment, please make sure that MMDetection and MMDeploy-1.x are installed correctly.
-- Install MMDetection-3.x, please refer to the [MMDetection-3.x installation guide](https://mmdetection.readthedocs.io/en/3.x/get_started.html).
+- To install MMDetection, please refer to the [MMDetection installation guide](https://mmdetection.readthedocs.io/en/latest/get_started.html).
- Install MMDeploy-1.x, please refer to the [MMDeploy-1.x installation guide](https://mmdeploy.readthedocs.io/en/1.x/get_started.html#installation).
If you want to deploy RTMDet with ONNXRuntime, TensorRT, or other inference engine,
@@ -387,13 +387,13 @@ In MMDetection's config, we use `model` to set up detection algorithm components
model = dict(
type='RTMDet', # The name of detector
data_preprocessor=dict( # The config of data preprocessor, usually includes image normalization and padding
- type='DetDataPreprocessor', # The type of the data preprocessor. Refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.data_preprocessors.DetDataPreprocessor
+ type='DetDataPreprocessor', # The type of the data preprocessor. Refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.data_preprocessors.DetDataPreprocessor
mean=[103.53, 116.28, 123.675], # Mean values used to pre-training the pre-trained backbone models, ordered in R, G, B
std=[57.375, 57.12, 58.395], # Standard variance used to pre-training the pre-trained backbone models, ordered in R, G, B
bgr_to_rgb=False, # whether to convert image from BGR to RGB
batch_augments=None), # Batch-level augmentations
backbone=dict( # The config of backbone
- type='CSPNeXt', # The type of backbone network. Refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.backbones.CSPNeXt
+ type='CSPNeXt', # The type of backbone network. Refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.backbones.CSPNeXt
arch='P5', # Architecture of CSPNeXt, from {P5, P6}. Defaults to P5
expand_ratio=0.5, # Ratio to adjust the number of channels of the hidden layer. Defaults to 0.5
deepen_factor=1, # Depth multiplier, multiply number of blocks in CSP layer by this amount. Defaults to 1.0
@@ -402,7 +402,7 @@ model = dict(
norm_cfg=dict(type='SyncBN'), # Dictionary to construct and config norm layer. Defaults to dict(type=’BN’, requires_grad=True)
act_cfg=dict(type='SiLU', inplace=True)), # Config dict for activation layer. Defaults to dict(type=’SiLU’)
neck=dict(
- type='CSPNeXtPAFPN', # The type of neck is CSPNeXtPAFPN. Refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.necks.CSPNeXtPAFPN
+ type='CSPNeXtPAFPN', # The type of neck is CSPNeXtPAFPN. Refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.necks.CSPNeXtPAFPN
in_channels=[256, 512, 1024], # Number of input channels per scale
out_channels=256, # Number of output channels (used at each scale)
num_csp_blocks=3, # Number of bottlenecks in CSPLayer. Defaults to 3
@@ -410,23 +410,23 @@ model = dict(
norm_cfg=dict(type='SyncBN'), # Config dict for normalization layer. Default: dict(type=’BN’)
act_cfg=dict(type='SiLU', inplace=True)), # Config dict for activation layer. Default: dict(type=’Swish’)
bbox_head=dict(
- type='RTMDetSepBNHead', # The type of bbox_head is RTMDetSepBNHead. RTMDetHead with separated BN layers and shared conv layers. Refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.dense_heads.RTMDetSepBNHead
+ type='RTMDetSepBNHead', # The type of bbox_head is RTMDetSepBNHead. RTMDetHead with separated BN layers and shared conv layers. Refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.dense_heads.RTMDetSepBNHead
num_classes=80, # Number of categories excluding the background category
in_channels=256, # Number of channels in the input feature map
stacked_convs=2, # Number of stacked convolution layers in the head. Defaults to 2
feat_channels=256, # Feature channels of convolutional layers in the head
anchor_generator=dict( # The config of anchor generator
- type='MlvlPointGenerator', # The methods use MlvlPointGenerator. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/prior_generators/point_generator.py#L92
+ type='MlvlPointGenerator', # The methods use MlvlPointGenerator. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/prior_generators/point_generator.py#L92
offset=0, # The offset of points, normalized by the corresponding stride. Defaults to 0.5
strides=[8, 16, 32]), # Strides of anchors in multiple feature levels in order (w, h)
- bbox_coder=dict(type='DistancePointBBoxCoder'), # Distance Point BBox coder.This coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left,right) and decode it back to the original. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/coders/distance_point_bbox_coder.py#L9
+ bbox_coder=dict(type='DistancePointBBoxCoder'), # Distance Point BBox coder. This coder encodes gt bboxes (x1, y1, x2, y2) into (top, bottom, left, right) and decodes them back to the original format. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/coders/distance_point_bbox_coder.py#L9
loss_cls=dict( # Config of loss function for the classification branch
- type='QualityFocalLoss', # Type of loss for classification branch. Refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.losses.QualityFocalLoss
+ type='QualityFocalLoss', # Type of loss for classification branch. Refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.losses.QualityFocalLoss
use_sigmoid=True, # Whether sigmoid operation is conducted in QFL. Defaults to True
beta=2.0, # The beta parameter for calculating the modulating factor. Defaults to 2.0
loss_weight=1.0), # Loss weight of current loss
loss_bbox=dict( # Config of loss function for the regression branch
- type='GIoULoss', # Type of loss. Refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.losses.GIoULoss
+ type='GIoULoss', # Type of loss. Refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.losses.GIoULoss
loss_weight=2.0), # Loss weight of the regression branch
with_objectness=False, # Whether to add an objectness branch. Defaults to True
exp_on_reg=True, # Whether to use .exp() in regression
@@ -436,7 +436,7 @@ model = dict(
act_cfg=dict(type='SiLU', inplace=True)), # Config dict for activation layer. Defaults to dict(type='SiLU')
train_cfg=dict( # Config of training hyperparameters
assigner=dict( # Config of assigner
- type='DynamicSoftLabelAssigner', # Type of assigner. DynamicSoftLabelAssigner computes matching between predictions and ground truth with dynamic soft label assignment. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/assigners/dynamic_soft_label_assigner.py#L40
+ type='DynamicSoftLabelAssigner', # Type of assigner. DynamicSoftLabelAssigner computes matching between predictions and ground truth with dynamic soft label assignment. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/assigners/dynamic_soft_label_assigner.py#L40
topk=13), # Select top-k predictions to calculate dynamic k best matches for each gt. Defaults to 13
allowed_border=-1, # The border allowed after padding for valid anchors
pos_weight=-1, # The weight of positive samples during training
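As a plain-Python illustration (not part of the patch itself), the nested config above is just a tree of `dict`s; the sketch below uses a trimmed-down stand-in with a few of the fields shown to demonstrate how values are looked up. `mmengine`'s `Config` adds attribute access on top of the same structure.

```python
# Trimmed-down stand-in for the RTMDet model config discussed above.
# Only a handful of the real fields are reproduced, for illustration.
model = dict(
    backbone=dict(type='CSPNeXt', arch='P5', deepen_factor=1),
    neck=dict(type='CSPNeXtPAFPN', in_channels=[256, 512, 1024], out_channels=256),
    bbox_head=dict(
        type='RTMDetSepBNHead',
        num_classes=80,
        anchor_generator=dict(type='MlvlPointGenerator', strides=[8, 16, 32]),
        loss_bbox=dict(type='GIoULoss', loss_weight=2.0),
    ),
)

# Nested fields are read with ordinary dict indexing.
print(model['bbox_head']['num_classes'])                   # number of foreground classes
print(model['bbox_head']['anchor_generator']['strides'])   # per-level anchor strides
```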
diff --git a/configs/rtmdet/classification/README.md b/configs/rtmdet/classification/README.md
index dbfef4c7249..6aee2c61794 100644
--- a/configs/rtmdet/classification/README.md
+++ b/configs/rtmdet/classification/README.md
@@ -43,7 +43,7 @@ bash ./tools/dist_train.sh \
[optional arguments]
```
-More details can be found in [user guides](https://mmdetection.readthedocs.io/en/3.x/user_guides/train.html).
+More details can be found in [user guides](https://mmdetection.readthedocs.io/en/latest/user_guides/train.html).
## Results and Models
diff --git a/configs/scnet/README.md b/configs/scnet/README.md
index 090827d048a..08dbfa87f56 100644
--- a/configs/scnet/README.md
+++ b/configs/scnet/README.md
@@ -46,7 +46,7 @@ The results on COCO 2017val are shown in the below table. (results on test-dev a
### Notes
-- Training hyper-parameters are identical to those of [HTC](https://github.com/open-mmlab/mmdetection/tree/master/configs/htc).
+- Training hyper-parameters are identical to those of [HTC](https://github.com/open-mmlab/mmdetection/tree/main/configs/htc).
- TTA means Test Time Augmentation, which applies horizontal flip and multi-scale testing. Refer to [config](./scnet_r50_fpn_1x_coco.py).
## Citation
diff --git a/docker/Dockerfile b/docker/Dockerfile
index 4c804044c7a..2737ec0efce 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -29,11 +29,11 @@ RUN apt-get update \
# Install MMEngine and MMCV
RUN pip install openmim && \
- mim install "mmengine>=0.6.0" "mmcv>=2.0.0rc4"
+ mim install "mmengine>=0.7.1" "mmcv>=2.0.0rc4"
# Install MMDetection
RUN conda clean --all \
- && git clone https://github.com/open-mmlab/mmdetection.git -b 3.x /mmdetection \
+ && git clone https://github.com/open-mmlab/mmdetection.git /mmdetection \
&& cd /mmdetection \
&& pip install --no-cache-dir -e .
diff --git a/docker/serve/Dockerfile b/docker/serve/Dockerfile
index 7a215f935ab..9a6a7784a2f 100644
--- a/docker/serve/Dockerfile
+++ b/docker/serve/Dockerfile
@@ -4,7 +4,7 @@ ARG CUDNN="8"
FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
ARG MMCV="2.0.0rc4"
-ARG MMDET="3.0.0rc6"
+ARG MMDET="3.0.0"
ENV PYTHONUNBUFFERED TRUE
diff --git a/docker/serve_cn/Dockerfile b/docker/serve_cn/Dockerfile
index 7812d8b7198..b1dfb00b869 100644
--- a/docker/serve_cn/Dockerfile
+++ b/docker/serve_cn/Dockerfile
@@ -4,7 +4,7 @@ ARG CUDNN="8"
FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
ARG MMCV="2.0.0rc4"
-ARG MMDET="3.0.0rc6"
+ARG MMDET="3.0.0"
ENV PYTHONUNBUFFERED TRUE
diff --git a/docs/en/advanced_guides/how_to.md b/docs/en/advanced_guides/how_to.md
index f038f297445..8b19fc9db5b 100644
--- a/docs/en/advanced_guides/how_to.md
+++ b/docs/en/advanced_guides/how_to.md
@@ -37,7 +37,7 @@ model = dict(
MMClassification also provides a wrapper for the PyTorch Image Models (timm) backbones, so users can directly use timm backbone networks through MMClassification. Suppose you want to use [EfficientNet-B1](../../../configs/timm_example/retinanet_timm-efficientnet-b1_fpn_1x_coco.py) as the backbone network of RetinaNet; an example config is as follows.
```python
-# https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/timm_example/retinanet_timm-efficientnet-b1_fpn_1x_coco.py
+# https://github.com/open-mmlab/mmdetection/blob/main/configs/timm_example/retinanet_timm-efficientnet-b1_fpn_1x_coco.py
_base_ = [
'../_base_/models/retinanet_r50_fpn.py',
diff --git a/docs/en/conf.py b/docs/en/conf.py
index e902e3fa8b1..d2beaf1e5c1 100644
--- a/docs/en/conf.py
+++ b/docs/en/conf.py
@@ -2,7 +2,7 @@
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html
# -- Path setup --------------------------------------------------------------
@@ -67,7 +67,7 @@ def get_version():
'.md': 'markdown',
}
# The master toctree document.
master_doc = 'index'
# List of patterns, relative to source directory, that match files and
diff --git a/docs/en/get_started.md b/docs/en/get_started.md
index 303da496ae6..fd342847670 100644
--- a/docs/en/get_started.md
+++ b/docs/en/get_started.md
@@ -54,8 +54,7 @@ mim install "mmcv>=2.0.0rc1"
Case a: If you develop and run mmdet directly, install it from source:
```shell
-git clone https://github.com/open-mmlab/mmdetection.git -b 3.x
-# "-b 3.x" means checkout to the `3.x` branch.
+git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -v -e .
# "-v" means verbose, or more output
@@ -66,7 +65,7 @@ pip install -v -e .
Case b: If you use mmdet as a dependency or third-party package, install it with MIM:
```shell
-mim install "mmdet>=3.0.0rc0"
+mim install mmdet
```
## Verify the installation
@@ -186,7 +185,7 @@ thus we only need to install MMEngine, MMCV, and MMDetection with the following
**Step 2.** Install MMDetection from the source.
```shell
-!git clone https://github.com/open-mmlab/mmdetection.git -b 3.x
+!git clone https://github.com/open-mmlab/mmdetection.git
%cd mmdetection
!pip install -e .
```
diff --git a/docs/en/model_zoo.md b/docs/en/model_zoo.md
index fcacdb0f35a..15dd7b2fb5b 100644
--- a/docs/en/model_zoo.md
+++ b/docs/en/model_zoo.md
@@ -10,7 +10,7 @@ We only use aliyun to maintain the model zoo since MMDetection V2.0. The model z
- We use distributed training.
- All pytorch-style pretrained backbones on ImageNet are from the PyTorch model zoo; caffe-style pretrained backbones are converted from the newly released models of detectron2.
- For fair comparison with other codebases, we report the GPU memory as the maximum value of `torch.cuda.max_memory_allocated()` for all 8 GPUs. Note that this value is usually less than what `nvidia-smi` shows.
-- We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time. Results are obtained with the script [benchmark.py](https://github.com/open-mmlab/mmdetection/blob/master/tools/analysis_tools/benchmark.py) which computes the average time on 2000 images.
+- We report the inference time as the total time of network forwarding and post-processing, excluding the data loading time. Results are obtained with the script [benchmark.py](https://github.com/open-mmlab/mmdetection/blob/main/tools/analysis_tools/benchmark.py) which computes the average time on 2000 images.
## ImageNet Pretrained Models
@@ -37,244 +37,244 @@ The detailed table of the commonly used backbone models in MMDetection is listed
### RPN
-Please refer to [RPN](https://github.com/open-mmlab/mmdetection/blob/master/configs/rpn) for details.
+Please refer to [RPN](https://github.com/open-mmlab/mmdetection/blob/main/configs/rpn) for details.
### Faster R-CNN
-Please refer to [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn) for details.
+Please refer to [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/faster_rcnn) for details.
### Mask R-CNN
-Please refer to [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask_rcnn) for details.
+Please refer to [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/mask_rcnn) for details.
### Fast R-CNN (with pre-computed proposals)
-Please refer to [Fast R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/fast_rcnn) for details.
+Please refer to [Fast R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/fast_rcnn) for details.
### RetinaNet
-Please refer to [RetinaNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/retinanet) for details.
+Please refer to [RetinaNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/retinanet) for details.
### Cascade R-CNN and Cascade Mask R-CNN
-Please refer to [Cascade R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/cascade_rcnn) for details.
+Please refer to [Cascade R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/cascade_rcnn) for details.
### Hybrid Task Cascade (HTC)
-Please refer to [HTC](https://github.com/open-mmlab/mmdetection/blob/master/configs/htc) for details.
+Please refer to [HTC](https://github.com/open-mmlab/mmdetection/blob/main/configs/htc) for details.
### SSD
-Please refer to [SSD](https://github.com/open-mmlab/mmdetection/blob/master/configs/ssd) for details.
+Please refer to [SSD](https://github.com/open-mmlab/mmdetection/blob/main/configs/ssd) for details.
### Group Normalization (GN)
-Please refer to [Group Normalization](https://github.com/open-mmlab/mmdetection/blob/master/configs/gn) for details.
+Please refer to [Group Normalization](https://github.com/open-mmlab/mmdetection/blob/main/configs/gn) for details.
### Weight Standardization
-Please refer to [Weight Standardization](https://github.com/open-mmlab/mmdetection/blob/master/configs/gn+ws) for details.
+Please refer to [Weight Standardization](https://github.com/open-mmlab/mmdetection/blob/main/configs/gn+ws) for details.
### Deformable Convolution v2
-Please refer to [Deformable Convolutional Networks](https://github.com/open-mmlab/mmdetection/blob/master/configs/dcn) for details.
+Please refer to [Deformable Convolutional Networks](https://github.com/open-mmlab/mmdetection/blob/main/configs/dcn) for details.
### CARAFE: Content-Aware ReAssembly of FEatures
-Please refer to [CARAFE](https://github.com/open-mmlab/mmdetection/blob/master/configs/carafe) for details.
+Please refer to [CARAFE](https://github.com/open-mmlab/mmdetection/blob/main/configs/carafe) for details.
### Instaboost
-Please refer to [Instaboost](https://github.com/open-mmlab/mmdetection/blob/master/configs/instaboost) for details.
+Please refer to [Instaboost](https://github.com/open-mmlab/mmdetection/blob/main/configs/instaboost) for details.
### Libra R-CNN
-Please refer to [Libra R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/libra_rcnn) for details.
+Please refer to [Libra R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/libra_rcnn) for details.
### Guided Anchoring
-Please refer to [Guided Anchoring](https://github.com/open-mmlab/mmdetection/blob/master/configs/guided_anchoring) for details.
+Please refer to [Guided Anchoring](https://github.com/open-mmlab/mmdetection/blob/main/configs/guided_anchoring) for details.
### FCOS
-Please refer to [FCOS](https://github.com/open-mmlab/mmdetection/blob/master/configs/fcos) for details.
+Please refer to [FCOS](https://github.com/open-mmlab/mmdetection/blob/main/configs/fcos) for details.
### FoveaBox
-Please refer to [FoveaBox](https://github.com/open-mmlab/mmdetection/blob/master/configs/foveabox) for details.
+Please refer to [FoveaBox](https://github.com/open-mmlab/mmdetection/blob/main/configs/foveabox) for details.
### RepPoints
-Please refer to [RepPoints](https://github.com/open-mmlab/mmdetection/blob/master/configs/reppoints) for details.
+Please refer to [RepPoints](https://github.com/open-mmlab/mmdetection/blob/main/configs/reppoints) for details.
### FreeAnchor
-Please refer to [FreeAnchor](https://github.com/open-mmlab/mmdetection/blob/master/configs/free_anchor) for details.
+Please refer to [FreeAnchor](https://github.com/open-mmlab/mmdetection/blob/main/configs/free_anchor) for details.
### Grid R-CNN (plus)
-Please refer to [Grid R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/grid_rcnn) for details.
+Please refer to [Grid R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/grid_rcnn) for details.
### GHM
-Please refer to [GHM](https://github.com/open-mmlab/mmdetection/blob/master/configs/ghm) for details.
+Please refer to [GHM](https://github.com/open-mmlab/mmdetection/blob/main/configs/ghm) for details.
### GCNet
-Please refer to [GCNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/gcnet) for details.
+Please refer to [GCNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/gcnet) for details.
### HRNet
-Please refer to [HRNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/hrnet) for details.
+Please refer to [HRNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/hrnet) for details.
### Mask Scoring R-CNN
-Please refer to [Mask Scoring R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/ms_rcnn) for details.
+Please refer to [Mask Scoring R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/ms_rcnn) for details.
### Train from Scratch
-Please refer to [Rethinking ImageNet Pre-training](https://github.com/open-mmlab/mmdetection/blob/master/configs/scratch) for details.
+Please refer to [Rethinking ImageNet Pre-training](https://github.com/open-mmlab/mmdetection/blob/main/configs/scratch) for details.
### NAS-FPN
-Please refer to [NAS-FPN](https://github.com/open-mmlab/mmdetection/blob/master/configs/nas_fpn) for details.
+Please refer to [NAS-FPN](https://github.com/open-mmlab/mmdetection/blob/main/configs/nas_fpn) for details.
### ATSS
-Please refer to [ATSS](https://github.com/open-mmlab/mmdetection/blob/master/configs/atss) for details.
+Please refer to [ATSS](https://github.com/open-mmlab/mmdetection/blob/main/configs/atss) for details.
### FSAF
-Please refer to [FSAF](https://github.com/open-mmlab/mmdetection/blob/master/configs/fsaf) for details.
+Please refer to [FSAF](https://github.com/open-mmlab/mmdetection/blob/main/configs/fsaf) for details.
### RegNetX
-Please refer to [RegNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/regnet) for details.
+Please refer to [RegNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/regnet) for details.
### Res2Net
-Please refer to [Res2Net](https://github.com/open-mmlab/mmdetection/blob/master/configs/res2net) for details.
+Please refer to [Res2Net](https://github.com/open-mmlab/mmdetection/blob/main/configs/res2net) for details.
### GRoIE
-Please refer to [GRoIE](https://github.com/open-mmlab/mmdetection/blob/master/configs/groie) for details.
+Please refer to [GRoIE](https://github.com/open-mmlab/mmdetection/blob/main/configs/groie) for details.
### Dynamic R-CNN
-Please refer to [Dynamic R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/dynamic_rcnn) for details.
+Please refer to [Dynamic R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/dynamic_rcnn) for details.
### PointRend
-Please refer to [PointRend](https://github.com/open-mmlab/mmdetection/blob/master/configs/point_rend) for details.
+Please refer to [PointRend](https://github.com/open-mmlab/mmdetection/blob/main/configs/point_rend) for details.
### DetectoRS
-Please refer to [DetectoRS](https://github.com/open-mmlab/mmdetection/blob/master/configs/detectors) for details.
+Please refer to [DetectoRS](https://github.com/open-mmlab/mmdetection/blob/main/configs/detectors) for details.
### Generalized Focal Loss
-Please refer to [Generalized Focal Loss](https://github.com/open-mmlab/mmdetection/blob/master/configs/gfl) for details.
+Please refer to [Generalized Focal Loss](https://github.com/open-mmlab/mmdetection/blob/main/configs/gfl) for details.
### CornerNet
-Please refer to [CornerNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/cornernet) for details.
+Please refer to [CornerNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/cornernet) for details.
### YOLOv3
-Please refer to [YOLOv3](https://github.com/open-mmlab/mmdetection/blob/master/configs/yolo) for details.
+Please refer to [YOLOv3](https://github.com/open-mmlab/mmdetection/blob/main/configs/yolo) for details.
### PAA
-Please refer to [PAA](https://github.com/open-mmlab/mmdetection/blob/master/configs/paa) for details.
+Please refer to [PAA](https://github.com/open-mmlab/mmdetection/blob/main/configs/paa) for details.
### SABL
-Please refer to [SABL](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl) for details.
+Please refer to [SABL](https://github.com/open-mmlab/mmdetection/blob/main/configs/sabl) for details.
### CentripetalNet
-Please refer to [CentripetalNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/centripetalnet) for details.
+Please refer to [CentripetalNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/centripetalnet) for details.
### ResNeSt
-Please refer to [ResNeSt](https://github.com/open-mmlab/mmdetection/blob/master/configs/resnest) for details.
+Please refer to [ResNeSt](https://github.com/open-mmlab/mmdetection/blob/main/configs/resnest) for details.
### DETR
-Please refer to [DETR](https://github.com/open-mmlab/mmdetection/blob/master/configs/detr) for details.
+Please refer to [DETR](https://github.com/open-mmlab/mmdetection/blob/main/configs/detr) for details.
### Deformable DETR
-Please refer to [Deformable DETR](https://github.com/open-mmlab/mmdetection/blob/master/configs/deformable_detr) for details.
+Please refer to [Deformable DETR](https://github.com/open-mmlab/mmdetection/blob/main/configs/deformable_detr) for details.
### AutoAssign
-Please refer to [AutoAssign](https://github.com/open-mmlab/mmdetection/blob/master/configs/autoassign) for details.
+Please refer to [AutoAssign](https://github.com/open-mmlab/mmdetection/blob/main/configs/autoassign) for details.
### YOLOF
-Please refer to [YOLOF](https://github.com/open-mmlab/mmdetection/blob/master/configs/yolof) for details.
+Please refer to [YOLOF](https://github.com/open-mmlab/mmdetection/blob/main/configs/yolof) for details.
### Seesaw Loss
-Please refer to [Seesaw Loss](https://github.com/open-mmlab/mmdetection/blob/master/configs/seesaw_loss) for details.
+Please refer to [Seesaw Loss](https://github.com/open-mmlab/mmdetection/blob/main/configs/seesaw_loss) for details.
### CenterNet
-Please refer to [CenterNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/centernet) for details.
+Please refer to [CenterNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/centernet) for details.
### YOLOX
-Please refer to [YOLOX](https://github.com/open-mmlab/mmdetection/blob/master/configs/yolox) for details.
+Please refer to [YOLOX](https://github.com/open-mmlab/mmdetection/blob/main/configs/yolox) for details.
### PVT
-Please refer to [PVT](https://github.com/open-mmlab/mmdetection/blob/master/configs/pvt) for details.
+Please refer to [PVT](https://github.com/open-mmlab/mmdetection/blob/main/configs/pvt) for details.
### SOLO
-Please refer to [SOLO](https://github.com/open-mmlab/mmdetection/blob/master/configs/solo) for details.
+Please refer to [SOLO](https://github.com/open-mmlab/mmdetection/blob/main/configs/solo) for details.
### QueryInst
-Please refer to [QueryInst](https://github.com/open-mmlab/mmdetection/blob/master/configs/queryinst) for details.
+Please refer to [QueryInst](https://github.com/open-mmlab/mmdetection/blob/main/configs/queryinst) for details.
### PanopticFPN
-Please refer to [PanopticFPN](https://github.com/open-mmlab/mmdetection/blob/master/configs/panoptic_fpn) for details.
+Please refer to [PanopticFPN](https://github.com/open-mmlab/mmdetection/blob/main/configs/panoptic_fpn) for details.
### MaskFormer
-Please refer to [MaskFormer](https://github.com/open-mmlab/mmdetection/blob/master/configs/maskformer) for details.
+Please refer to [MaskFormer](https://github.com/open-mmlab/mmdetection/blob/main/configs/maskformer) for details.
### DyHead
-Please refer to [DyHead](https://github.com/open-mmlab/mmdetection/blob/master/configs/dyhead) for details.
+Please refer to [DyHead](https://github.com/open-mmlab/mmdetection/blob/main/configs/dyhead) for details.
### Mask2Former
-Please refer to [Mask2Former](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask2former) for details.
+Please refer to [Mask2Former](https://github.com/open-mmlab/mmdetection/blob/main/configs/mask2former) for details.
### Efficientnet
-Please refer to [Efficientnet](https://github.com/open-mmlab/mmdetection/blob/master/configs/efficientnet) for details.
+Please refer to [Efficientnet](https://github.com/open-mmlab/mmdetection/blob/main/configs/efficientnet) for details.
### Other datasets
-We also benchmark some methods on [PASCAL VOC](https://github.com/open-mmlab/mmdetection/blob/master/configs/pascal_voc), [Cityscapes](https://github.com/open-mmlab/mmdetection/blob/master/configs/cityscapes), [OpenImages](https://github.com/open-mmlab/mmdetection/blob/master/configs/openimages) and [WIDER FACE](https://github.com/open-mmlab/mmdetection/blob/master/configs/wider_face).
+We also benchmark some methods on [PASCAL VOC](https://github.com/open-mmlab/mmdetection/blob/main/configs/pascal_voc), [Cityscapes](https://github.com/open-mmlab/mmdetection/blob/main/configs/cityscapes), [OpenImages](https://github.com/open-mmlab/mmdetection/blob/main/configs/openimages) and [WIDER FACE](https://github.com/open-mmlab/mmdetection/blob/main/configs/wider_face).
### Pre-trained Models
-We also train [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn) and [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask_rcnn) using ResNet-50 and [RegNetX-3.2G](https://github.com/open-mmlab/mmdetection/blob/master/configs/regnet) with multi-scale training and longer schedules. These models serve as strong pre-trained models for downstream tasks for convenience.
+We also train [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/faster_rcnn) and [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/mask_rcnn) using ResNet-50 and [RegNetX-3.2G](https://github.com/open-mmlab/mmdetection/blob/main/configs/regnet) with multi-scale training and longer schedules. These models serve as strong pre-trained models for downstream tasks.
## Speed benchmark
### Training Speed benchmark
-We provide [analyze_logs.py](https://github.com/open-mmlab/mmdetection/blob/master/tools/analysis_tools/analyze_logs.py) to get average time of iteration in training. You can find examples in [Log Analysis](https://mmdetection.readthedocs.io/en/latest/useful_tools.html#log-analysis).
+We provide [analyze_logs.py](https://github.com/open-mmlab/mmdetection/blob/main/tools/analysis_tools/analyze_logs.py) to compute the average iteration time during training. You can find examples in [Log Analysis](https://mmdetection.readthedocs.io/en/latest/useful_tools.html#log-analysis).
-We compare the training speed of Mask R-CNN with some other popular frameworks (The data is copied from [detectron2](https://github.com/facebookresearch/detectron2/blob/master/docs/notes/benchmarks.md/)).
-For mmdetection, we benchmark with [mask_rcnn_r50_caffe_fpn_poly_1x_coco_v1.py](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_poly_1x_coco_v1.py), which should have the same setting with [mask_rcnn_R_50_FPN_noaug_1x.yaml](https://github.com/facebookresearch/detectron2/blob/master/configs/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x.yaml) of detectron2.
+We compare the training speed of Mask R-CNN with some other popular frameworks (data copied from [detectron2](https://github.com/facebookresearch/detectron2/blob/main/docs/notes/benchmarks.md/)).
+For mmdetection, we benchmark with [mask-rcnn_r50-caffe_fpn_poly-1x_coco_v1.py](https://github.com/open-mmlab/mmdetection/blob/main/configs/mask_rcnn/mask-rcnn_r50-caffe_fpn_poly-1x_coco_v1.py), which should have the same settings as [mask_rcnn_R_50_FPN_noaug_1x.yaml](https://github.com/facebookresearch/detectron2/blob/main/configs/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x.yaml) of detectron2.
We also provide the [checkpoint](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug_compare_20200518-10127928.pth) and [training log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug_20200518_105755.log.json) for reference. The throughput is computed as the average throughput in iterations 100-500 to skip GPU warmup time.
| Implementation | Throughput (img/s) |
@@ -289,7 +289,7 @@ We also provide the [checkpoint](https://download.openmmlab.com/mmdetection/v2.0
### Inference Speed Benchmark
-We provide [benchmark.py](https://github.com/open-mmlab/mmdetection/blob/master/tools/analysis_tools/benchmark.py) to benchmark the inference latency.
+We provide [benchmark.py](https://github.com/open-mmlab/mmdetection/blob/main/tools/analysis_tools/benchmark.py) to benchmark the inference latency.
The script benchmarks the model with 2000 images and calculates the average time, ignoring the first 5 iterations. You can change the output log interval (default: 50) by setting `LOG-INTERVAL`.
```shell
@@ -319,11 +319,11 @@ For fair comparison, we install and run both frameworks on the same machine.
### Performance
-| Type | Lr schd | Detectron2 | mmdetection | Download |
-| -------------------------------------------------------------------------------------------------------------------------------------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------- | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py) | 1x | [37.9](https://github.com/facebookresearch/detectron2/blob/master/configs/COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml) | 38.0 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-5324cff8.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco_20200429_234554.log.json) |
-| [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py) | 1x | [38.6 & 35.2](https://github.com/facebookresearch/detectron2/blob/master/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml) | 38.8 & 35.4 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco-dbecf295.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco_20200430_054239.log.json) |
-| [Retinanet](https://github.com/open-mmlab/mmdetection/blob/master/configs/retinanet/retinanet_r50_caffe_fpn_mstrain_1x_coco.py) | 1x | [36.5](https://github.com/facebookresearch/detectron2/blob/master/configs/COCO-Detection/retinanet_R_50_FPN_1x.yaml) | 37.0 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/retinanet_r50_caffe_fpn_mstrain_1x_coco/retinanet_r50_caffe_fpn_mstrain_1x_coco-586977a0.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/retinanet_r50_caffe_fpn_mstrain_1x_coco/retinanet_r50_caffe_fpn_mstrain_1x_coco_20200430_014748.log.json) |
+| Type | Lr schd | Detectron2 | mmdetection | Download |
+| ------------------------------------------------------------------------------------------------------------------------------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------ | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py) | 1x | [37.9](https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml) | 38.0 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-5324cff8.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco_20200429_234554.log.json) |
+| [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/mask_rcnn/mask-rcnn_r50-caffe_fpn_ms-poly-1x_coco.py) | 1x | [38.6 & 35.2](https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml) | 38.8 & 35.4 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco-dbecf295.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco_20200430_054239.log.json) |
+| [Retinanet](https://github.com/open-mmlab/mmdetection/blob/main/configs/retinanet/retinanet_r50-caffe_fpn_ms-1x_coco.py) | 1x | [36.5](https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-Detection/retinanet_R_50_FPN_1x.yaml) | 37.0 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/retinanet_r50_caffe_fpn_mstrain_1x_coco/retinanet_r50_caffe_fpn_mstrain_1x_coco-586977a0.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/retinanet_r50_caffe_fpn_mstrain_1x_coco/retinanet_r50_caffe_fpn_mstrain_1x_coco_20200430_014748.log.json) |
### Training Speed
diff --git a/docs/en/notes/changelog.md b/docs/en/notes/changelog.md
index 4e8bb27a742..ded9dc30189 100644
--- a/docs/en/notes/changelog.md
+++ b/docs/en/notes/changelog.md
@@ -1,5 +1,51 @@
# Changelog of v3.x
+## v3.0.0 (6/4/2023)
+
+### Highlights
+
+- Support semi-automatic annotation based on [Label-Studio](../../../projects/LabelStudio) (#10039)
+- Support [EfficientDet](../../../projects/EfficientDet) in projects (#9810)
+
+### New Features
+
+- File I/O migration and reconstruction (#9709)
+- Release DINO Swin-L 36e model (#9927)
+
+### Bug Fixes
+
+- Fix benchmark script (#9865)
+- Fix the crop method of PolygonMasks (#9858)
+- Fix Albu augmentation with the mask shape (#9918)
+- Fix `RTMDetIns` prior generator device error (#9964)
+- Fix `img_shape` in data pipeline (#9966)
+- Fix cityscapes import error (#9984)
+- Fix `solov2_r50_fpn_ms-3x_coco.py` config error (#10030)
+- Fix Conditional DETR AP and Log (#9889)
+- Fix the unexpected argument `local-rank` error in PyTorch 2.0 (#10050)
+- Fix `common/ms_3x_coco-instance.py` config error (#10056)
+- Fix compute flops error (#10051)
+- Delete `data_root` in `CocoOccludedSeparatedMetric` to fix bug (#9969)
+- Unified `metafile.yml` (#9849)
+
+### Improvements
+
+- Added BoxInst r101 config (#9967)
+- Added config migration guide (#9960)
+- Added more social networking links (#10021)
+- Added an RTMDet config introduction (#10042)
+- Added visualization docs (#9938, #10058)
+- Refined data_prepare docs (#9935)
+- Added support for setting the cache_size_limit parameter of dynamo in PyTorch 2.0 (#10054)
+- Updated `coco_metric.py` (#10033)
+- Updated type hints (#10040)
+
+### Contributors
+
+A total of 19 developers contributed to this release.
+
+Thanks @IRONICBo, @vansin, @RangeKing, @Ghlerrix, @okotaku, @JosonChan1998, @zgzhengSE, @bobo0810, @yechenzh, @Zheng-LinXiao, @LYMDLUT, @yarkable, @xiejiajiannb, @chhluo, @BIGWangYuDong, @RangiLy, @zwhus, @hhaAndroid, @ZwwWayne
+
## v3.0.0rc6 (24/2/2023)
### Highlights
diff --git a/docs/en/notes/changelog_v2.x.md b/docs/en/notes/changelog_v2.x.md
index af2e048f7b2..2b3a230c0d9 100644
--- a/docs/en/notes/changelog_v2.x.md
+++ b/docs/en/notes/changelog_v2.x.md
@@ -133,7 +133,7 @@ Thanks @ZwwWayne, @DarthThomas, @solyaH, @LutingWang, @chenxinfeng4, @Czm369, @C
data=dict(train_dataloader=dict(class_aware_sampler=dict(num_sample_class=1))))
```
- in the config to use `ClassAwareSampler`. Examples can be found in [the configs of OpenImages Dataset](https://github.com/open-mmlab/mmdetection/tree/master/configs/openimages/faster_rcnn_r50_fpn_32x2_cas_1x_openimages.py). (#7436)
+ in the config to use `ClassAwareSampler`. Examples can be found in [the configs of OpenImages Dataset](https://github.com/open-mmlab/mmdetection/tree/main/configs/openimages/faster_rcnn_r50_fpn_32x2_cas_1x_openimages.py). (#7436)
- Support automatically scaling LR according to GPU number and samples per GPU. (#7482)
In each config, there is a corresponding config of auto-scaling LR as below,
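The auto-scaling LR entry mentioned above is truncated by the diff; a sketch of the usual shape of this config (treat the exact values as illustrative, and the `scaled_lr` helper as a hypothetical illustration of the linear scaling rule, not MMDetection code):

```python
# Illustrative sketch of the auto-scaling LR entry found in MMDetection configs.
# `base_batch_size` is the total batch size the config's LR was tuned for
# (e.g. 8 GPUs x 2 samples per GPU = 16); when enabled, the runner rescales
# the learning rate linearly to the actual total batch size.
auto_scale_lr = dict(enable=False, base_batch_size=16)


def scaled_lr(base_lr, num_gpus, samples_per_gpu, base_batch_size=16):
    """Hypothetical helper showing the linear scaling rule itself."""
    return base_lr * (num_gpus * samples_per_gpu) / base_batch_size
```

For example, a config tuned for 8x2 with `lr=0.02` would, with scaling enabled, train at twice that rate on 16 GPUs with 2 samples each.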
diff --git a/docs/en/notes/compatibility.md b/docs/en/notes/compatibility.md
index a545a495fd3..26325e249dc 100644
--- a/docs/en/notes/compatibility.md
+++ b/docs/en/notes/compatibility.md
@@ -75,7 +75,7 @@ MMDetection v2.12.0 relies on the newest features in MMCV 1.3.3, including `Base
### Unified model initialization
-To unify the parameter initialization in OpenMMLab projects, MMCV supports `BaseModule` that accepts `init_cfg` to allow the modules' parameters initialized in a flexible and unified manner. Now the users need to explicitly call `model.init_weights()` in the training script to initialize the model (as in [here](https://github.com/open-mmlab/mmdetection/blob/master/tools/train.py#L162), previously this was handled by the detector. **The downstream projects must update their model initialization accordingly to use MMDetection v2.12.0**. Please refer to PR #4750 for details.
+To unify the parameter initialization in OpenMMLab projects, MMCV supports `BaseModule`, which accepts `init_cfg` to allow the modules' parameters to be initialized in a flexible and unified manner. Now users need to explicitly call `model.init_weights()` in the training script to initialize the model (as in [here](https://github.com/open-mmlab/mmdetection/blob/main/tools/train.py#L162)); previously this was handled by the detector. **Downstream projects must update their model initialization accordingly to use MMDetection v2.12.0**. Please refer to PR #4750 for details.
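The pattern is: build the model first, then call `model.init_weights()` explicitly before training. A toy sketch of that control flow (these are illustrative classes, not MMCV's actual `BaseModule`):

```python
# Toy illustration of the BaseModule/init_cfg pattern described above.
# This is NOT MMCV code; it only mimics the control flow: weights stay
# untouched at construction time, until the user calls init_weights().
class ToyBaseModule:
    def __init__(self, init_cfg=None):
        self.init_cfg = init_cfg
        self.initialized = False

    def init_weights(self):
        # In MMCV, this would dispatch on init_cfg['type']
        # (e.g. 'Pretrained', 'Kaiming', 'Normal').
        self.initialized = True


model = ToyBaseModule(init_cfg=dict(type='Pretrained',
                                    checkpoint='torchvision://resnet50'))
assert not model.initialized   # building the model does not init weights
model.init_weights()           # the training script must call this explicitly
assert model.initialized
```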
### Unified model registry
diff --git a/docs/en/notes/faq.md b/docs/en/notes/faq.md
index e1948125401..d87387ac5d2 100644
--- a/docs/en/notes/faq.md
+++ b/docs/en/notes/faq.md
@@ -1,6 +1,6 @@
# Frequently Asked Questions
-We list some common troubles faced by many users and their corresponding solutions here. Feel free to enrich the list if you find any frequent issues and have ways to help others to solve them. If the contents here do not cover your issue, please create an issue using the [provided templates](https://github.com/open-mmlab/mmdetection/blob/master/.github/ISSUE_TEMPLATE/error-report.md/) and make sure you fill in all required information in the template.
+We list some common troubles faced by many users and their corresponding solutions here. Feel free to enrich the list if you find any frequent issues and have ways to help others to solve them. If the contents here do not cover your issue, please create an issue using the [provided templates](https://github.com/open-mmlab/mmdetection/blob/main/.github/ISSUE_TEMPLATE/error-report.md/) and make sure you fill in all required information in the template.
## PyTorch 2.0 Support
@@ -40,25 +40,26 @@ About the common questions about PyTorch 2.0's dynamo, you can refer to [here](h
## Installation
-- Compatibility issue between MMCV and MMDetection; "ConvWS is already registered in conv layer"; "AssertionError: MMCV==xxx is used but incompatible. Please install mmcv>=xxx, \<=xxx."
+Compatibility issue between MMCV and MMDetection; "ConvWS is already registered in conv layer"; "AssertionError: MMCV==xxx is used but incompatible. Please install mmcv>=xxx, \<=xxx."
- Compatible MMDetection, MMEngine, and MMCV versions are shown as below. Please choose the correct version of MMCV to avoid installation issues.
+Compatible MMDetection, MMEngine, and MMCV versions are shown below. Please choose the correct version of MMCV to avoid installation issues.
- | MMDetection version | MMCV version | MMEngine version |
- | :-----------------: | :---------------------: | :----------------------: |
- | 3.x | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.6.0, \<1.0.0 |
- | 3.0.0rc6 | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.6.0, \<1.0.0 |
- | 3.0.0rc5 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
- | 3.0.0rc4 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
- | 3.0.0rc3 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
- | 3.0.0rc2 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.1.0, \<1.0.0 |
- | 3.0.0rc1 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.1.0, \<1.0.0 |
- | 3.0.0rc0 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.1.0, \<1.0.0 |
+| MMDetection version | MMCV version | MMEngine version |
+| :-----------------: | :---------------------: | :----------------------: |
+| main | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
+| 3.x | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
+| 3.0.0rc6 | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.6.0, \<1.0.0 |
+| 3.0.0rc5 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
+| 3.0.0rc4 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
+| 3.0.0rc3 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
+| 3.0.0rc2 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.1.0, \<1.0.0 |
+| 3.0.0rc1 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.1.0, \<1.0.0 |
+| 3.0.0rc0 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.1.0, \<1.0.0 |
- **Note:**
+**Note:**
- 1. If you want to install mmdet-v2.x, the compatible MMDetection and MMCV versions table can be found at [here](https://mmdetection.readthedocs.io/en/stable/faq.html#installation). Please choose the correct version of MMCV to avoid installation issues.
- 2. In MMCV-v2.x, `mmcv-full` is rename to `mmcv`, if you want to install `mmcv` without CUDA ops, you can install `mmcv-lite`.
+1. If you want to install mmdet-v2.x, the table of compatible MMDetection and MMCV versions can be found [here](https://mmdetection.readthedocs.io/en/stable/faq.html#installation). Please choose the correct version of MMCV to avoid installation issues.
+2. In MMCV v2.x, `mmcv-full` has been renamed to `mmcv`. If you want to install `mmcv` without CUDA ops, you can install `mmcv-lite` instead.
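The version table above can be expressed as a small compatibility check. The sketch below hard-codes a couple of rows and compares versions as integer tuples; it is illustrative only and does not handle pre-release suffixes (e.g. `2.0.0rc4`) the way `packaging.version` would:

```python
# Hypothetical helper: check an installed mmcv version against the
# constraint row for a given MMDetection version (subset of the table above,
# with the rc lower bound simplified to a plain release).
MMCV_CONSTRAINTS = {
    'main':     ((2, 0, 0), (2, 1, 0)),  # mmcv>=2.0.0rc4, <2.1.0
    '3.x':      ((2, 0, 0), (2, 1, 0)),
    '3.0.0rc6': ((2, 0, 0), (2, 1, 0)),
}


def mmcv_compatible(mmdet_version, mmcv_version):
    """Return True if mmcv_version satisfies the half-open range [lo, hi)."""
    lo, hi = MMCV_CONSTRAINTS[mmdet_version]
    v = tuple(int(p) for p in mmcv_version.split('.')[:3])
    return lo <= v < hi


assert mmcv_compatible('main', '2.0.0')
assert not mmcv_compatible('main', '2.1.0')  # upper bound is exclusive
```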
- "No module named 'mmcv.ops'"; "No module named 'mmcv.\_ext'".
diff --git a/docs/en/overview.md b/docs/en/overview.md
index 39fb6f51564..7c7d96b7087 100644
--- a/docs/en/overview.md
+++ b/docs/en/overview.md
@@ -42,13 +42,13 @@ Here is a detailed step-by-step guide to learn more about MMDetection:
2. Refer to the below tutorials for the basic usage of MMDetection.
- - [Train and Test](https://mmdetection.readthedocs.io/en/dev-3.x/user_guides/index.html#train-test)
+ - [Train and Test](https://mmdetection.readthedocs.io/en/latest/user_guides/index.html#train-test)
- - [Useful Tools](https://mmdetection.readthedocs.io/en/dev-3.x/user_guides/index.html#useful-tools)
+ - [Useful Tools](https://mmdetection.readthedocs.io/en/latest/user_guides/index.html#useful-tools)
3. Refer to the below tutorials to dive deeper:
- - [Basic Concepts](https://mmdetection.readthedocs.io/en/dev-3.x/advanced_guides/index.html#basic-concepts)
- - [Component Customization](https://mmdetection.readthedocs.io/en/dev-3.x/advanced_guides/index.html#component-customization)
+ - [Basic Concepts](https://mmdetection.readthedocs.io/en/latest/advanced_guides/index.html#basic-concepts)
+ - [Component Customization](https://mmdetection.readthedocs.io/en/latest/advanced_guides/index.html#component-customization)
4. For users of MMDetection 2.x version, we provide a guide to help you adapt to the new version. You can find it in the [migration guide](./migration/migration.md).
diff --git a/docs/en/stat.py b/docs/en/stat.py
index 44f03b6616c..f0589e337e0 100755
--- a/docs/en/stat.py
+++ b/docs/en/stat.py
@@ -6,7 +6,7 @@
import numpy as np
-url_prefix = 'https://github.com/open-mmlab/mmdetection/blob/3.x/configs'
+url_prefix = 'https://github.com/open-mmlab/mmdetection/blob/main/configs'
files = sorted(glob.glob('../../configs/*/README.md'))
diff --git a/docs/en/user_guides/config.md b/docs/en/user_guides/config.md
index 2ee3bc9bf68..69bd91194e0 100644
--- a/docs/en/user_guides/config.md
+++ b/docs/en/user_guides/config.md
@@ -14,14 +14,14 @@ In MMDetection's config, we use `model` to set up detection algorithm components
model = dict(
type='MaskRCNN', # The name of detector
data_preprocessor=dict( # The config of data preprocessor, usually includes image normalization and padding
- type='DetDataPreprocessor', # The type of the data preprocessor, refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.data_preprocessors.DetDataPreprocessor
+ type='DetDataPreprocessor', # The type of the data preprocessor, refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.data_preprocessors.DetDataPreprocessor
mean=[123.675, 116.28, 103.53], # Mean values used to pre-training the pre-trained backbone models, ordered in R, G, B
std=[58.395, 57.12, 57.375], # Standard variance used to pre-training the pre-trained backbone models, ordered in R, G, B
bgr_to_rgb=True, # whether to convert image from BGR to RGB
pad_mask=True, # whether to pad instance masks
pad_size_divisor=32), # The size of padded image should be divisible by ``pad_size_divisor``
backbone=dict( # The config of backbone
- type='ResNet', # The type of backbone network. Refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.backbones.ResNet
+ type='ResNet', # The type of backbone network. Refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.backbones.ResNet
depth=50, # The depth of backbone, usually it is 50 or 101 for ResNet and ResNext backbones.
num_stages=4, # Number of stages of the backbone.
out_indices=(0, 1, 2, 3), # The index of output feature maps produced in each stage
@@ -33,34 +33,34 @@ model = dict(
style='pytorch', # The style of backbone, 'pytorch' means that stride 2 layers are in 3x3 Conv, 'caffe' means stride 2 layers are in 1x1 Convs.
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), # The ImageNet pretrained backbone to be loaded
neck=dict(
- type='FPN', # The neck of detector is FPN. We also support 'NASFPN', 'PAFPN', etc. Refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.necks.FPN for more details.
+ type='FPN', # The neck of detector is FPN. We also support 'NASFPN', 'PAFPN', etc. Refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.necks.FPN for more details.
in_channels=[256, 512, 1024, 2048], # The input channels, this is consistent with the output channels of backbone
out_channels=256, # The output channels of each level of the pyramid feature map
num_outs=5), # The number of output scales
rpn_head=dict(
- type='RPNHead', # The type of RPN head is 'RPNHead', we also support 'GARPNHead', etc. Refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.dense_heads.RPNHead for more details.
+ type='RPNHead', # The type of RPN head is 'RPNHead', we also support 'GARPNHead', etc. Refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.dense_heads.RPNHead for more details.
in_channels=256, # The input channels of each input feature map, this is consistent with the output channels of neck
feat_channels=256, # Feature channels of convolutional layers in the head.
anchor_generator=dict( # The config of anchor generator
- type='AnchorGenerator', # Most of methods use AnchorGenerator, SSD Detectors uses `SSDAnchorGenerator`. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/prior_generators/anchor_generator.py#L18 for more details
+ type='AnchorGenerator', # Most of methods use AnchorGenerator, SSD Detectors uses `SSDAnchorGenerator`. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/prior_generators/anchor_generator.py#L18 for more details
scales=[8], # Basic scale of the anchor, the area of the anchor in one position of a feature map will be scale * base_sizes
ratios=[0.5, 1.0, 2.0], # The ratio between height and width.
strides=[4, 8, 16, 32, 64]), # The strides of the anchor generator. This is consistent with the FPN feature strides. The strides will be taken as base_sizes if base_sizes is not set.
bbox_coder=dict( # Config of box coder to encode and decode the boxes during training and testing
- type='DeltaXYWHBBoxCoder', # Type of box coder. 'DeltaXYWHBBoxCoder' is applied for most of the methods. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/coders/delta_xywh_bbox_coder.py#L13 for more details.
+ type='DeltaXYWHBBoxCoder', # Type of box coder. 'DeltaXYWHBBoxCoder' is applied for most of the methods. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/coders/delta_xywh_bbox_coder.py#L13 for more details.
target_means=[0.0, 0.0, 0.0, 0.0], # The target means used to encode and decode boxes
target_stds=[1.0, 1.0, 1.0, 1.0]), # The standard variance used to encode and decode boxes
loss_cls=dict( # Config of loss function for the classification branch
- type='CrossEntropyLoss', # Type of loss for classification branch, we also support FocalLoss etc. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/losses/cross_entropy_loss.py#L201 for more details
+ type='CrossEntropyLoss', # Type of loss for classification branch, we also support FocalLoss etc. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/losses/cross_entropy_loss.py#L201 for more details
use_sigmoid=True, # RPN usually performs two-class classification, so it usually uses the sigmoid function.
loss_weight=1.0), # Loss weight of the classification branch.
loss_bbox=dict( # Config of loss function for the regression branch.
- type='L1Loss', # Type of loss, we also support many IoU Losses and smooth L1-loss, etc. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/losses/smooth_l1_loss.py#L56 for implementation.
+ type='L1Loss', # Type of loss, we also support many IoU Losses and smooth L1-loss, etc. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/losses/smooth_l1_loss.py#L56 for implementation.
loss_weight=1.0)), # Loss weight of the regression branch.
roi_head=dict( # RoIHead encapsulates the second stage of two-stage/cascade detectors.
type='StandardRoIHead',
bbox_roi_extractor=dict( # RoI feature extractor for bbox regression.
- type='SingleRoIExtractor', # Type of the RoI feature extractor, most of methods uses SingleRoIExtractor. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py#L13 for details.
+ type='SingleRoIExtractor', # Type of the RoI feature extractor, most of methods uses SingleRoIExtractor. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py#L13 for details.
roi_layer=dict( # Config of RoI Layer
type='RoIAlign', # Type of RoI Layer, DeformRoIPoolingPack and ModulatedDeformRoIPoolingPack are also supported. Refer to https://mmcv.readthedocs.io/en/latest/api.html#mmcv.ops.RoIAlign for details.
output_size=7, # The output size of feature maps.
@@ -68,7 +68,7 @@ model = dict(
out_channels=256, # output channels of the extracted feature.
featmap_strides=[4, 8, 16, 32]), # Strides of multi-scale feature maps. It should be consistent with the architecture of the backbone.
bbox_head=dict( # Config of box head in the RoIHead.
- type='Shared2FCBBoxHead', # Type of the bbox head, Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py#L220 for implementation details.
+ type='Shared2FCBBoxHead', # Type of the bbox head, Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py#L220 for implementation details.
in_channels=256, # Input channels for bbox head. This is consistent with the out_channels in roi_extractor
fc_out_channels=1024, # Output feature channels of FC layers.
roi_feat_size=7, # Size of RoI features
@@ -94,7 +94,7 @@ model = dict(
out_channels=256, # Output channels of the extracted feature.
featmap_strides=[4, 8, 16, 32]), # Strides of multi-scale feature maps.
mask_head=dict( # Mask prediction head
- type='FCNMaskHead', # Type of mask head, refer to https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.roi_heads.FCNMaskHead for implementation details.
+ type='FCNMaskHead', # Type of mask head, refer to https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.roi_heads.FCNMaskHead for implementation details.
num_convs=4, # Number of convolutional layers in mask head.
in_channels=256, # Input channels, should be consistent with the output channels of mask roi extractor.
conv_out_channels=256, # Output channels of the convolutional layer.
@@ -106,14 +106,14 @@ model = dict(
train_cfg = dict( # Config of training hyperparameters for rpn and rcnn
rpn=dict( # Training config of rpn
assigner=dict( # Config of assigner
- type='MaxIoUAssigner', # Type of assigner, MaxIoUAssigner is used for many common detectors. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/assigners/max_iou_assigner.py#L14 for more details.
+ type='MaxIoUAssigner', # Type of assigner, MaxIoUAssigner is used for many common detectors. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/assigners/max_iou_assigner.py#L14 for more details.
pos_iou_thr=0.7, # IoU >= threshold 0.7 will be taken as positive samples
neg_iou_thr=0.3, # IoU < threshold 0.3 will be taken as negative samples
min_pos_iou=0.3, # The minimal IoU threshold to take boxes as positive samples
match_low_quality=True, # Whether to match the boxes under low quality (see API doc for more details).
ignore_iof_thr=-1), # IoF threshold for ignoring bboxes
sampler=dict( # Config of positive/negative sampler
- type='RandomSampler', # Type of sampler, PseudoSampler and other samplers are also supported. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/samplers/random_sampler.py#L14 for implementation details.
+ type='RandomSampler', # Type of sampler, PseudoSampler and other samplers are also supported. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/samplers/random_sampler.py#L14 for implementation details.
num=256, # Number of samples
pos_fraction=0.5, # The ratio of positive samples in the total samples.
neg_pos_ub=-1, # The upper bound of negative samples based on the number of positive samples.
@@ -133,14 +133,14 @@ model = dict(
min_bbox_size=0), # The allowed minimal box size
rcnn=dict( # The config for the roi heads.
assigner=dict( # Config of assigner for second stage, this is different for that in rpn
- type='MaxIoUAssigner', # Type of assigner, MaxIoUAssigner is used for all roi_heads for now. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/assigners/max_iou_assigner.py#L14 for more details.
+ type='MaxIoUAssigner', # Type of assigner, MaxIoUAssigner is used for all roi_heads for now. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/assigners/max_iou_assigner.py#L14 for more details.
pos_iou_thr=0.5, # IoU >= threshold 0.5 will be taken as positive samples
neg_iou_thr=0.5, # IoU < threshold 0.5 will be taken as negative samples
min_pos_iou=0.5, # The minimal IoU threshold to take boxes as positive samples
match_low_quality=False, # Whether to match the boxes under low quality (see API doc for more details).
ignore_iof_thr=-1), # IoF threshold for ignoring bboxes
sampler=dict(
- type='RandomSampler', # Type of sampler, PseudoSampler and other samplers are also supported. Refer to https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/samplers/random_sampler.py#L14 for implementation details.
+ type='RandomSampler', # Type of sampler, PseudoSampler and other samplers are also supported. Refer to https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/samplers/random_sampler.py#L14 for implementation details.
num=512, # Number of samples
pos_fraction=0.25, # The ratio of positive samples in the total samples.
neg_pos_ub=-1, # The upper bound of negative samples based on the number of positive samples.
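To make the assigner thresholds above concrete, here is a stripped-down sketch of the MaxIoU assignment rule for RPN (illustrative only; the real `MaxIoUAssigner` also handles `min_pos_iou`, `match_low_quality`, and ignore regions):

```python
def assign_label(max_iou, pos_iou_thr=0.7, neg_iou_thr=0.3):
    """Toy version of the RPN assignment rule in the config above.

    Given an anchor's best IoU with any ground-truth box, return 1 for a
    positive anchor, 0 for a negative anchor, and -1 for an anchor whose
    best IoU falls between the two thresholds and is therefore ignored.
    """
    if max_iou >= pos_iou_thr:
        return 1
    if max_iou < neg_iou_thr:
        return 0
    return -1


assert assign_label(0.8) == 1   # IoU >= 0.7 -> positive sample
assert assign_label(0.2) == 0   # IoU < 0.3  -> negative sample
assert assign_label(0.5) == -1  # in between -> ignored
```

The second-stage `rcnn` assigner follows the same rule with both thresholds set to 0.5, so no boxes fall into the ignored band.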
diff --git a/docs/en/user_guides/deploy.md b/docs/en/user_guides/deploy.md
index ab525c278fd..94c078882e3 100644
--- a/docs/en/user_guides/deploy.md
+++ b/docs/en/user_guides/deploy.md
@@ -15,7 +15,7 @@ This tutorial is organized as follows:
## Installation
-Please follow the [guide](https://mmdetection.readthedocs.io/en/3.x/get_started.html) to install mmdet. And then install mmdeploy from source by following [this](https://mmdeploy.readthedocs.io/en/1.x/get_started.html#installation) guide.
+Please follow the [guide](https://mmdetection.readthedocs.io/en/latest/get_started.html) to install mmdet, and then install mmdeploy from source by following [this](https://mmdeploy.readthedocs.io/en/1.x/get_started.html#installation) guide.
```{note}
If you install mmdeploy prebuilt package, please also clone its repository by 'git clone https://github.com/open-mmlab/mmdeploy.git --depth=1' to get the deployment config files.
@@ -25,7 +25,7 @@ If you install mmdeploy prebuilt package, please also clone its repository by 'g
Suppose mmdetection and mmdeploy repositories are in the same directory, and the working directory is the root path of mmdetection.
-Take [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py) model as an example. You can download its checkpoint from [here](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth), and then convert it to onnx model as follows:
+Take [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py) model as an example. You can download its checkpoint from [here](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth), and then convert it to onnx model as follows:
```python
from mmdeploy.apis import torch2onnx
diff --git a/docs/en/user_guides/inference.md b/docs/en/user_guides/inference.md
index 59a963d5d0f..33257ed5ed4 100644
--- a/docs/en/user_guides/inference.md
+++ b/docs/en/user_guides/inference.md
@@ -3,9 +3,9 @@
MMDetection provides hundreds of pre-trained detection models in [Model Zoo](https://mmdetection.readthedocs.io/en/latest/model_zoo.html).
This note will show how to inference, which means using trained models to detect objects on images.
-In MMDetection, a model is defined by a [configuration file](https://mmdetection.readthedocs.io/en/3.x/user_guides/config.html) and existing model parameters are saved in a checkpoint file.
+In MMDetection, a model is defined by a [configuration file](https://mmdetection.readthedocs.io/en/latest/user_guides/config.html) and existing model parameters are saved in a checkpoint file.
-To start with, we recommend [RTMDet](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet) with this [configuration file](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/rtmdet/rtmdet_l_8xb32-300e_coco.py) and this [checkpoint file](https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_l_8xb32-300e_coco/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth). It is recommended to download the checkpoint file to `checkpoints` directory.
+To start with, we recommend [RTMDet](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet) with this [configuration file](https://github.com/open-mmlab/mmdetection/blob/main/configs/rtmdet/rtmdet_l_8xb32-300e_coco.py) and this [checkpoint file](https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_l_8xb32-300e_coco/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth). We recommend downloading the checkpoint file to the `checkpoints` directory.
## High-level APIs for inference
@@ -84,14 +84,14 @@ for frame in track_iter_progress(video_reader):
cv2.destroyAllWindows()
```
-A notebook demo can be found in [demo/inference_demo.ipynb](https://github.com/open-mmlab/mmdetection/blob/3.x/demo/inference_demo.ipynb).
+A notebook demo can be found in [demo/inference_demo.ipynb](https://github.com/open-mmlab/mmdetection/blob/main/demo/inference_demo.ipynb).
Note: `inference_detector` only supports single-image inference for now.
## Demos
We also provide three demo scripts, implemented with high-level APIs and supporting functionality codes.
-Source codes are available [here](https://github.com/open-mmlab/mmdetection/blob/3.x/demo).
+Source codes are available [here](https://github.com/open-mmlab/mmdetection/blob/main/demo).
### Image demo
diff --git a/docs/en/user_guides/label_studio.md b/docs/en/user_guides/label_studio.md
index 07a1e84a2e2..c86a77d9c22 100644
--- a/docs/en/user_guides/label_studio.md
+++ b/docs/en/user_guides/label_studio.md
@@ -41,7 +41,7 @@ mim install "mmcv>=2.0.0rc0"
Install MMDetection:
```shell
-git clone https://github.com/open-mmlab/mmdetection -b dev-3.x
+git clone https://github.com/open-mmlab/mmdetection
cd mmdetection
pip install -v -e .
```
diff --git a/docs/en/user_guides/test.md b/docs/en/user_guides/test.md
index 302dd5949c2..a7855e10ec7 100644
--- a/docs/en/user_guides/test.md
+++ b/docs/en/user_guides/test.md
@@ -55,7 +55,7 @@ Optional arguments:
Assuming that you have already downloaded the checkpoints to the directory `checkpoints/`.
1. Test RTMDet and visualize the results. Press any key for the next image.
- Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet).
+ Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet).
```shell
python tools/test.py \
@@ -65,7 +65,7 @@ Assuming that you have already downloaded the checkpoints to the directory `chec
```
2. Test RTMDet and save the painted images for future visualization.
- Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet).
+ Config and checkpoint files are available [here](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet).
```shell
python tools/test.py \
diff --git a/docs/en/user_guides/train.md b/docs/en/user_guides/train.md
index ec8181e8617..071a0b99720 100644
--- a/docs/en/user_guides/train.md
+++ b/docs/en/user_guides/train.md
@@ -436,7 +436,7 @@ To train a model with the new config, you can simply run
python tools/train.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py
```
-For more detailed usages, please refer to the [training guide](https://mmdetection.readthedocs.io/en/3.x/user_guides/train.html#train-predefined-models-on-standard-datasets).
+For more detailed usages, please refer to the [training guide](https://mmdetection.readthedocs.io/en/latest/user_guides/train.html#train-predefined-models-on-standard-datasets).
## Test and inference
@@ -446,4 +446,4 @@ To test the trained model, you can simply run
python tools/test.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py work_dirs/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon/epoch_12.pth
```
-For more detailed usages, please refer to the [testing guide](https://mmdetection.readthedocs.io/en/3.x/user_guides/test.html).
+For more detailed usages, please refer to the [testing guide](https://mmdetection.readthedocs.io/en/latest/user_guides/test.html).
diff --git a/docs/en/user_guides/useful_hooks.md b/docs/en/user_guides/useful_hooks.md
index 13b6bcf5846..4c30686d68a 100644
--- a/docs/en/user_guides/useful_hooks.md
+++ b/docs/en/user_guides/useful_hooks.md
@@ -8,7 +8,7 @@ MMDetection and MMEngine provide users with various useful hooks including log h
## MemoryProfilerHook
-[Memory profiler hook](https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/engine/hooks/memory_profiler_hook.py) records memory information including virtual memory, swap memory, and the memory of the current process. This hook helps grasp the memory usage of the system and discover potential memory leak bugs. To use this hook, users should install `memory_profiler` and `psutil` by `pip install memory_profiler psutil` first.
+[Memory profiler hook](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/memory_profiler_hook.py) records memory information including virtual memory, swap memory, and the memory of the current process. This hook helps grasp the memory usage of the system and discover potential memory leak bugs. To use this hook, users should install `memory_profiler` and `psutil` by `pip install memory_profiler psutil` first.
### Usage
diff --git a/docs/en/user_guides/useful_tools.md b/docs/en/user_guides/useful_tools.md
index 5cce0cb97e6..007d367ec8c 100644
--- a/docs/en/user_guides/useful_tools.md
+++ b/docs/en/user_guides/useful_tools.md
@@ -308,7 +308,7 @@ comparisons, but double check it before you adopt it in technical reports or pap
1. FLOPs are related to the input shape while parameters are not. The default
input shape is (1, 3, 1280, 800).
-2. Some operators are not counted into FLOPs like GN and custom operators. Refer to [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/dev-3.x/mmcv/cnn/utils/flops_counter.py) for details.
+2. Some operators are not counted into FLOPs like GN and custom operators. Refer to [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/2.x/mmcv/cnn/utils/flops_counter.py) for details.
3. The FLOPs of two-stage detectors depend on the number of proposals.
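As a rough illustration of note 1 above, a minimal sketch (assuming a single hypothetical stride-1 3x3 convolution, not mmcv's actual counter) of why FLOPs track the input shape while the parameter count does not:

```python
# Hypothetical illustration: multiply-accumulate count of one padded,
# stride-1 k x k convolution scales linearly with the input's spatial
# area, which is why reported FLOPs depend on the default (1, 3, 1280, 800)
# input shape while the number of parameters stays fixed.
def conv_flops(h, w, c_in, c_out, k=3):
    # One MAC per output pixel, per output channel, per kernel weight.
    return h * w * c_in * c_out * k * k

full = conv_flops(1280, 800, 3, 64)
half = conv_flops(640, 400, 3, 64)
# Halving each spatial dimension quarters the FLOPs.
assert full == 4 * half
```

The same spatial-scaling argument applies to every convolutional layer in a backbone, so the whole-network FLOPs scale roughly with input area as well.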
## Model conversion
diff --git a/docs/zh_cn/advanced_guides/customize_dataset.md b/docs/zh_cn/advanced_guides/customize_dataset.md
index e2fee435080..e845f37f2db 100644
--- a/docs/zh_cn/advanced_guides/customize_dataset.md
+++ b/docs/zh_cn/advanced_guides/customize_dataset.md
@@ -174,7 +174,7 @@ model = dict(
]
```
-我们使用这种方式来支持 CityScapes 数据集。脚本在 [cityscapes.py](https://github.com/open-mmlab/mmdetection/blob/3.x/tools/dataset_converters/cityscapes.py) 并且我们提供了微调的 [configs](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/cityscapes).
+我们使用这种方式来支持 CityScapes 数据集。脚本在 [cityscapes.py](https://github.com/open-mmlab/mmdetection/blob/main/tools/dataset_converters/cityscapes.py) 并且我们提供了微调的 [configs](https://github.com/open-mmlab/mmdetection/blob/main/configs/cityscapes).
**注意**
@@ -236,7 +236,7 @@ model = dict(
有些数据集可能会提供如:crowd/difficult/ignored bboxes 标注,那么我们使用 `ignore_flag`来包含它们。
-在得到上述标准的数据标注格式后,可以直接在配置中使用 MMDetection 的 [BaseDetDataset](https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/datasets/base_det_dataset.py#L13) ,而无需进行转换。
+在得到上述标准的数据标注格式后,可以直接在配置中使用 MMDetection 的 [BaseDetDataset](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/datasets/base_det_dataset.py#L13) ,而无需进行转换。
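The standard annotation layout referred to above (the one `BaseDetDataset` can consume without conversion) can be sketched as a plain Python dict. The image path, classes, and boxes below are made-up placeholders:

```python
# Hypothetical example of the intermediate annotation format: a 'metainfo'
# block plus a 'data_list' of per-image records, where each instance carries
# 'bbox' (x1, y1, x2, y2), 'bbox_label', and 'ignore_flag' (1 marks the
# crowd/difficult/ignored boxes mentioned above).
data = {
    'metainfo': {'classes': ('person', 'car')},  # placeholder classes
    'data_list': [
        {
            'img_path': 'images/0001.jpg',  # placeholder path
            'height': 480,
            'width': 640,
            'instances': [
                {'bbox': [10.0, 20.0, 110.0, 120.0],
                 'bbox_label': 0, 'ignore_flag': 0},
                {'bbox': [5.0, 5.0, 30.0, 30.0],
                 'bbox_label': 1, 'ignore_flag': 1},  # ignored at training
            ],
        },
    ],
}
```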
### 自定义数据集例子
@@ -351,7 +351,7 @@ test_dataloader = dict(
- 在 MMDetection v2.5.0 之前,如果类别为集合时数据集将自动过滤掉不包含 GT 的图片,且没办法通过修改配置将其关闭。这是一种不可取的行为而且会引起混淆,因为当类别不是集合时数据集时,只有在 `filter_empty_gt=True` 以及 `test_mode=False` 的情况下才会过滤掉不包含 GT 的图片。在 MMDetection v2.5.0 之后,我们将图片的过滤以及类别的修改进行解耦,数据集只有在 `filter_cfg=dict(filter_empty_gt=True)` 和 `test_mode=False` 的情况下才会过滤掉不包含 GT 的图片,无论类别是否为集合。设置类别只会影响用于训练的标注类别,用户可以自行决定是否过滤不包含 GT 的图片。
- 直接使用 MMEngine 中的 `BaseDataset` 或者 MMDetection 中的 `BaseDetDataset` 时用户不能通过修改配置来过滤不含 GT 的图片,但是可以通过离线的方式来解决。
-- 当设置数据集中的 `classes` 时,记得修改 `num_classes`。从 v2.9.0 (PR#4508) 之后,我们实现了 [NumClassCheckHook](https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/engine/hooks/num_class_check_hook.py) 来检查类别数是否一致。
+- 当设置数据集中的 `classes` 时,记得修改 `num_classes`。从 v2.9.0 (PR#4508) 之后,我们实现了 [NumClassCheckHook](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/num_class_check_hook.py) 来检查类别数是否一致。
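Putting the notes above together, a minimal config sketch (dataset type, classes, and head path are placeholders, not a runnable training config):

```python
# Hypothetical fragment: empty-GT filtering only takes effect when
# filter_cfg enables it AND test_mode is False, and the head's num_classes
# must match the length of the dataset's class list.
train_dataloader = dict(
    dataset=dict(
        type='CocoDataset',  # placeholder dataset type
        metainfo=dict(classes=('person', 'car')),
        filter_cfg=dict(filter_empty_gt=True, min_size=32),
        test_mode=False))

model = dict(roi_head=dict(bbox_head=dict(num_classes=2)))
```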
## COCO 全景分割数据集
diff --git a/docs/zh_cn/advanced_guides/customize_losses.md b/docs/zh_cn/advanced_guides/customize_losses.md
index e9f0ac83978..07ccccda128 100644
--- a/docs/zh_cn/advanced_guides/customize_losses.md
+++ b/docs/zh_cn/advanced_guides/customize_losses.md
@@ -39,7 +39,7 @@ train_cfg=dict(
## 微调损失
-微调一个损失主要与步骤 2,4,5 有关,大部分的修改可以在配置文件中指定。这里我们用 [Focal Loss (FL)](https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/losses/focal_loss.py) 作为例子。
+微调一个损失主要与步骤 2,4,5 有关,大部分的修改可以在配置文件中指定。这里我们用 [Focal Loss (FL)](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/losses/focal_loss.py) 作为例子。
下面的代码分别是构建 FL 的方法和它的配置文件,他们是一一对应的。
```python
@@ -105,7 +105,7 @@ loss_cls=dict(
## 加权损失(步骤3)
-加权损失就是我们逐元素修改损失权重。更具体来说,我们给损失张量乘以一个与他有相同形状的权重张量。所以,损失中不同的元素可以被赋予不同的比例,所以这里叫做逐元素。损失的权重在不同模型中变化很大,而且与上下文相关,但是总的来说主要有两种损失权重:分类损失的 `label_weights` 和边界框的 `bbox_weights`。你可以在相应的头中的 `get_target` 方法中找到他们。这里我们使用 [ATSSHead](https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/dense_heads/atss_head.py#L322) 作为一个例子。它继承了 [AnchorHead](https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/dense_heads/anchor_head.py) ,但是我们重写它的
+加权损失就是我们逐元素修改损失权重。更具体来说,我们给损失张量乘以一个与他有相同形状的权重张量。所以,损失中不同的元素可以被赋予不同的比例,所以这里叫做逐元素。损失的权重在不同模型中变化很大,而且与上下文相关,但是总的来说主要有两种损失权重:分类损失的 `label_weights` 和边界框的 `bbox_weights`。你可以在相应的头中的 `get_target` 方法中找到他们。这里我们使用 [ATSSHead](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/dense_heads/atss_head.py#L322) 作为一个例子。它继承了 [AnchorHead](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/dense_heads/anchor_head.py) ,但是我们重写它的
`get_targets` 方法来产生不同的 `label_weights` 和 `bbox_weights`。
```
diff --git a/docs/zh_cn/advanced_guides/customize_runtime.md b/docs/zh_cn/advanced_guides/customize_runtime.md
index 1f953eb41ed..d4a19098789 100644
--- a/docs/zh_cn/advanced_guides/customize_runtime.md
+++ b/docs/zh_cn/advanced_guides/customize_runtime.md
@@ -330,9 +330,9 @@ custom_hooks = [
#### 例子: `NumClassCheckHook`
-我们实现了一个名为 [NumClassCheckHook](https://github.com/open-mmlab/mmdetection/blob/dev-3.x/mmdet/engine/hooks/num_class_check_hook.py) 的自定义钩子来检查 `num_classes` 是否在 head 中和 `dataset` 中的 `classes` 的长度相匹配。
+我们实现了一个名为 [NumClassCheckHook](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/num_class_check_hook.py) 的自定义钩子来检查 `num_classes` 是否在 head 中和 `dataset` 中的 `classes` 的长度相匹配。
-我们在 [default_runtime.py](https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/_base_/default_runtime.py) 中设置它。
+我们在 [default_runtime.py](https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/default_runtime.py) 中设置它。
```python
custom_hooks = [dict(type='NumClassCheckHook')]
diff --git a/docs/zh_cn/advanced_guides/how_to.md b/docs/zh_cn/advanced_guides/how_to.md
index 64b03ffba17..8fede40cfd3 100644
--- a/docs/zh_cn/advanced_guides/how_to.md
+++ b/docs/zh_cn/advanced_guides/how_to.md
@@ -34,10 +34,10 @@ model = dict(
### 通过 MMClassification 使用 TIMM 中实现的骨干网络
-由于 MMClassification 提供了 Py**T**orch **Im**age **M**odels (`timm`) 骨干网络的封装,用户也可以通过 MMClassification 直接使用 `timm` 中的骨干网络。假设想将 [`EfficientNet-B1`](https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/timm_example/retinanet_timm-efficientnet-b1_fpn_1x_coco.py) 作为 `RetinaNet` 的骨干网络,则配置文件如下。
+由于 MMClassification 提供了 Py**T**orch **Im**age **M**odels (`timm`) 骨干网络的封装,用户也可以通过 MMClassification 直接使用 `timm` 中的骨干网络。假设想将 [`EfficientNet-B1`](../../../configs/timm_example/retinanet_timm-efficientnet-b1_fpn_1x_coco.py) 作为 `RetinaNet` 的骨干网络,则配置文件如下。
```python
-# https://github.com/open-mmlab/mmdetection/blob/master/configs/timm_example/retinanet_timm_efficientnet_b1_fpn_1x_coco.py
+# https://github.com/open-mmlab/mmdetection/blob/main/configs/timm_example/retinanet_timm-efficientnet-b1_fpn_1x_coco.py
_base_ = [
'../_base_/models/retinanet_r50_fpn.py',
'../_base_/datasets/coco_detection.py',
diff --git a/docs/zh_cn/conf.py b/docs/zh_cn/conf.py
index 1bb57a4a31b..e6878408971 100644
--- a/docs/zh_cn/conf.py
+++ b/docs/zh_cn/conf.py
@@ -67,7 +67,7 @@ def get_version():
'.md': 'markdown',
}
-# The master toctree document.
+# The main toctree document.
master_doc = 'index'
# List of patterns, relative to source directory, that match files and
diff --git a/docs/zh_cn/get_started.md b/docs/zh_cn/get_started.md
index d1898749eed..d9b1380234f 100644
--- a/docs/zh_cn/get_started.md
+++ b/docs/zh_cn/get_started.md
@@ -54,8 +54,7 @@ mim install "mmcv>=2.0.0rc1"
方案 a:如果你开发并直接运行 mmdet,从源码安装它:
```shell
-git clone https://github.com/open-mmlab/mmdetection.git -b 3.x
-# "-b 3.x" 表示切换到 `3.x` 分支。
+git clone https://github.com/open-mmlab/mmdetection.git
cd mmdetection
pip install -v -e .
# "-v" 指详细说明,或更多的输出
@@ -65,7 +64,7 @@ pip install -v -e .
方案 b:如果你将 mmdet 作为依赖或第三方 Python 包,使用 MIM 安装:
```shell
-mim install "mmdet>=3.0.0rc0"
+mim install mmdet
```
## 验证安装
@@ -183,7 +182,7 @@ MMDetection 可以在 CPU 环境中构建。在 CPU 模式下,可以进行模
**步骤 2.** 使用源码安装 MMDetection。
```shell
-!git clone https://github.com/open-mmlab/mmdetection.git -b 3.x
+!git clone https://github.com/open-mmlab/mmdetection.git
%cd mmdetection
!pip install -e .
```
diff --git a/docs/zh_cn/model_zoo.md b/docs/zh_cn/model_zoo.md
index afa74505861..b5376152d9c 100644
--- a/docs/zh_cn/model_zoo.md
+++ b/docs/zh_cn/model_zoo.md
@@ -10,7 +10,7 @@
- 我们使用分布式训练。
- 所有 pytorch-style 的 ImageNet 预训练主干网络来自 PyTorch 的模型库,caffe-style 的预训练主干网络来自 detectron2 最新开源的模型。
- 为了与其他代码库公平比较,文档中所写的 GPU 内存是8个 GPU 的 `torch.cuda.max_memory_allocated()` 的最大值,此值通常小于 nvidia-smi 显示的值。
-- 我们以网络 forward 和后处理的时间加和作为推理时间,不包含数据加载时间。所有结果通过 [benchmark.py](https://github.com/open-mmlab/mmdetection/blob/master/tools/analysis_tools/benchmark.py) 脚本计算所得。该脚本会计算推理 2000 张图像的平均时间。
+- 我们以网络 forward 和后处理的时间加和作为推理时间,不包含数据加载时间。所有结果通过 [benchmark.py](https://github.com/open-mmlab/mmdetection/blob/main/tools/analysis_tools/benchmark.py) 脚本计算所得。该脚本会计算推理 2000 张图像的平均时间。
## ImageNet 预训练模型
@@ -37,223 +37,223 @@ MMdetection 常用到的主干网络细节如下表所示:
### RPN
-请参考 [RPN](https://github.com/open-mmlab/mmdetection/blob/master/configs/rpn)。
+请参考 [RPN](https://github.com/open-mmlab/mmdetection/blob/main/configs/rpn)。
### Faster R-CNN
-请参考 [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn)。
+请参考 [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/faster_rcnn)。
### Mask R-CNN
-请参考 [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask_rcnn)。
+请参考 [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/mask_rcnn)。
### Fast R-CNN (使用提前计算的 proposals)
-请参考 [Fast R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/fast_rcnn)。
+请参考 [Fast R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/fast_rcnn)。
### RetinaNet
-请参考 [RetinaNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/retinanet)。
+请参考 [RetinaNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/retinanet)。
### Cascade R-CNN and Cascade Mask R-CNN
-请参考 [Cascade R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/cascade_rcnn)。
+请参考 [Cascade R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/cascade_rcnn)。
### Hybrid Task Cascade (HTC)
-请参考 [HTC](https://github.com/open-mmlab/mmdetection/blob/master/configs/htc)。
+请参考 [HTC](https://github.com/open-mmlab/mmdetection/blob/main/configs/htc)。
### SSD
-请参考 [SSD](https://github.com/open-mmlab/mmdetection/blob/master/configs/ssd)。
+请参考 [SSD](https://github.com/open-mmlab/mmdetection/blob/main/configs/ssd)。
### Group Normalization (GN)
-请参考 [Group Normalization](https://github.com/open-mmlab/mmdetection/blob/master/configs/gn)。
+请参考 [Group Normalization](https://github.com/open-mmlab/mmdetection/blob/main/configs/gn)。
### Weight Standardization
-请参考 [Weight Standardization](https://github.com/open-mmlab/mmdetection/blob/master/configs/gn+ws)。
+请参考 [Weight Standardization](https://github.com/open-mmlab/mmdetection/blob/main/configs/gn+ws)。
### Deformable Convolution v2
-请参考 [Deformable Convolutional Networks](https://github.com/open-mmlab/mmdetection/blob/master/configs/dcn)。
+请参考 [Deformable Convolutional Networks](https://github.com/open-mmlab/mmdetection/blob/main/configs/dcn)。
### CARAFE: Content-Aware ReAssembly of FEatures
-请参考 [CARAFE](https://github.com/open-mmlab/mmdetection/blob/master/configs/carafe)。
+请参考 [CARAFE](https://github.com/open-mmlab/mmdetection/blob/main/configs/carafe)。
### Instaboost
-请参考 [Instaboost](https://github.com/open-mmlab/mmdetection/blob/master/configs/instaboost)。
+请参考 [Instaboost](https://github.com/open-mmlab/mmdetection/blob/main/configs/instaboost)。
### Libra R-CNN
-请参考 [Libra R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/libra_rcnn)。
+请参考 [Libra R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/libra_rcnn)。
### Guided Anchoring
-请参考 [Guided Anchoring](https://github.com/open-mmlab/mmdetection/blob/master/configs/guided_anchoring)。
+请参考 [Guided Anchoring](https://github.com/open-mmlab/mmdetection/blob/main/configs/guided_anchoring)。
### FCOS
-请参考 [FCOS](https://github.com/open-mmlab/mmdetection/blob/master/configs/fcos)。
+请参考 [FCOS](https://github.com/open-mmlab/mmdetection/blob/main/configs/fcos)。
### FoveaBox
-请参考 [FoveaBox](https://github.com/open-mmlab/mmdetection/blob/master/configs/foveabox)。
+请参考 [FoveaBox](https://github.com/open-mmlab/mmdetection/blob/main/configs/foveabox)。
### RepPoints
-请参考 [RepPoints](https://github.com/open-mmlab/mmdetection/blob/master/configs/reppoints)。
+请参考 [RepPoints](https://github.com/open-mmlab/mmdetection/blob/main/configs/reppoints)。
### FreeAnchor
-请参考 [FreeAnchor](https://github.com/open-mmlab/mmdetection/blob/master/configs/free_anchor)。
+请参考 [FreeAnchor](https://github.com/open-mmlab/mmdetection/blob/main/configs/free_anchor)。
### Grid R-CNN (plus)
-请参考 [Grid R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/grid_rcnn)。
+请参考 [Grid R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/grid_rcnn)。
### GHM
-请参考 [GHM](https://github.com/open-mmlab/mmdetection/blob/master/configs/ghm)。
+请参考 [GHM](https://github.com/open-mmlab/mmdetection/blob/main/configs/ghm)。
### GCNet
-请参考 [GCNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/gcnet)。
+请参考 [GCNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/gcnet)。
### HRNet
-请参考 [HRNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/hrnet)。
+请参考 [HRNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/hrnet)。
### Mask Scoring R-CNN
-请参考 [Mask Scoring R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/ms_rcnn)。
+请参考 [Mask Scoring R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/ms_rcnn)。
### Train from Scratch
-请参考 [Rethinking ImageNet Pre-training](https://github.com/open-mmlab/mmdetection/blob/master/configs/scratch)。
+请参考 [Rethinking ImageNet Pre-training](https://github.com/open-mmlab/mmdetection/blob/main/configs/scratch)。
### NAS-FPN
-请参考 [NAS-FPN](https://github.com/open-mmlab/mmdetection/blob/master/configs/nas_fpn)。
+请参考 [NAS-FPN](https://github.com/open-mmlab/mmdetection/blob/main/configs/nas_fpn)。
### ATSS
-请参考 [ATSS](https://github.com/open-mmlab/mmdetection/blob/master/configs/atss)。
+请参考 [ATSS](https://github.com/open-mmlab/mmdetection/blob/main/configs/atss)。
### FSAF
-请参考 [FSAF](https://github.com/open-mmlab/mmdetection/blob/master/configs/fsaf)。
+请参考 [FSAF](https://github.com/open-mmlab/mmdetection/blob/main/configs/fsaf)。
### RegNetX
-请参考 [RegNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/regnet)。
+请参考 [RegNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/regnet)。
### Res2Net
-请参考 [Res2Net](https://github.com/open-mmlab/mmdetection/blob/master/configs/res2net)。
+请参考 [Res2Net](https://github.com/open-mmlab/mmdetection/blob/main/configs/res2net)。
### GRoIE
-请参考 [GRoIE](https://github.com/open-mmlab/mmdetection/blob/master/configs/groie)。
+请参考 [GRoIE](https://github.com/open-mmlab/mmdetection/blob/main/configs/groie)。
### Dynamic R-CNN
-请参考 [Dynamic R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/dynamic_rcnn)。
+请参考 [Dynamic R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/dynamic_rcnn)。
### PointRend
-请参考 [PointRend](https://github.com/open-mmlab/mmdetection/blob/master/configs/point_rend)。
+请参考 [PointRend](https://github.com/open-mmlab/mmdetection/blob/main/configs/point_rend)。
### DetectoRS
-请参考 [DetectoRS](https://github.com/open-mmlab/mmdetection/blob/master/configs/detectors)。
+请参考 [DetectoRS](https://github.com/open-mmlab/mmdetection/blob/main/configs/detectors)。
### Generalized Focal Loss
-请参考 [Generalized Focal Loss](https://github.com/open-mmlab/mmdetection/blob/master/configs/gfl)。
+请参考 [Generalized Focal Loss](https://github.com/open-mmlab/mmdetection/blob/main/configs/gfl)。
### CornerNet
-请参考 [CornerNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/cornernet)。
+请参考 [CornerNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/cornernet)。
### YOLOv3
-请参考 [YOLOv3](https://github.com/open-mmlab/mmdetection/blob/master/configs/yolo)。
+请参考 [YOLOv3](https://github.com/open-mmlab/mmdetection/blob/main/configs/yolo)。
### PAA
-请参考 [PAA](https://github.com/open-mmlab/mmdetection/blob/master/configs/paa)。
+请参考 [PAA](https://github.com/open-mmlab/mmdetection/blob/main/configs/paa)。
### SABL
-请参考 [SABL](https://github.com/open-mmlab/mmdetection/blob/master/configs/sabl)。
+请参考 [SABL](https://github.com/open-mmlab/mmdetection/blob/main/configs/sabl)。
### CentripetalNet
-请参考 [CentripetalNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/centripetalnet)。
+请参考 [CentripetalNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/centripetalnet)。
### ResNeSt
-请参考 [ResNeSt](https://github.com/open-mmlab/mmdetection/blob/master/configs/resnest)。
+请参考 [ResNeSt](https://github.com/open-mmlab/mmdetection/blob/main/configs/resnest)。
### DETR
-请参考 [DETR](https://github.com/open-mmlab/mmdetection/blob/master/configs/detr)。
+请参考 [DETR](https://github.com/open-mmlab/mmdetection/blob/main/configs/detr)。
### Deformable DETR
-请参考 [Deformable DETR](https://github.com/open-mmlab/mmdetection/blob/master/configs/deformable_detr)。
+请参考 [Deformable DETR](https://github.com/open-mmlab/mmdetection/blob/main/configs/deformable_detr)。
### AutoAssign
-请参考 [AutoAssign](https://github.com/open-mmlab/mmdetection/blob/master/configs/autoassign)。
+请参考 [AutoAssign](https://github.com/open-mmlab/mmdetection/blob/main/configs/autoassign)。
### YOLOF
-请参考 [YOLOF](https://github.com/open-mmlab/mmdetection/blob/master/configs/yolof)。
+请参考 [YOLOF](https://github.com/open-mmlab/mmdetection/blob/main/configs/yolof)。
### Seesaw Loss
-请参考 [Seesaw Loss](https://github.com/open-mmlab/mmdetection/blob/master/configs/seesaw_loss)。
+请参考 [Seesaw Loss](https://github.com/open-mmlab/mmdetection/blob/main/configs/seesaw_loss)。
### CenterNet
-请参考 [CenterNet](https://github.com/open-mmlab/mmdetection/blob/master/configs/centernet)。
+请参考 [CenterNet](https://github.com/open-mmlab/mmdetection/blob/main/configs/centernet)。
### YOLOX
-请参考 [YOLOX](https://github.com/open-mmlab/mmdetection/blob/master/configs/yolox)。
+请参考 [YOLOX](https://github.com/open-mmlab/mmdetection/blob/main/configs/yolox)。
### PVT
-请参考 [PVT](https://github.com/open-mmlab/mmdetection/blob/master/configs/pvt)。
+请参考 [PVT](https://github.com/open-mmlab/mmdetection/blob/main/configs/pvt)。
### SOLO
-请参考 [SOLO](https://github.com/open-mmlab/mmdetection/blob/master/configs/solo)。
+请参考 [SOLO](https://github.com/open-mmlab/mmdetection/blob/main/configs/solo)。
### QueryInst
-请参考 [QueryInst](https://github.com/open-mmlab/mmdetection/blob/master/configs/queryinst)。
+请参考 [QueryInst](https://github.com/open-mmlab/mmdetection/blob/main/configs/queryinst)。
### Other datasets
-我们还在 [PASCAL VOC](https://github.com/open-mmlab/mmdetection/blob/master/configs/pascal_voc),[Cityscapes](https://github.com/open-mmlab/mmdetection/blob/master/configs/cityscapes) 和 [WIDER FACE](https://github.com/open-mmlab/mmdetection/blob/master/configs/wider_face) 上对一些方法进行了基准测试。
+我们还在 [PASCAL VOC](https://github.com/open-mmlab/mmdetection/blob/main/configs/pascal_voc),[Cityscapes](https://github.com/open-mmlab/mmdetection/blob/main/configs/cityscapes) 和 [WIDER FACE](https://github.com/open-mmlab/mmdetection/blob/main/configs/wider_face) 上对一些方法进行了基准测试。
### Pre-trained Models
-我们还通过多尺度训练和更长的训练策略来训练用 ResNet-50 和 [RegNetX-3.2G](https://github.com/open-mmlab/mmdetection/blob/master/configs/regnet) 作为主干网络的 [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn) 和 [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask_rcnn)。这些模型可以作为下游任务的预训练模型。
+我们还通过多尺度训练和更长的训练策略来训练用 ResNet-50 和 [RegNetX-3.2G](https://github.com/open-mmlab/mmdetection/blob/main/configs/regnet) 作为主干网络的 [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/faster_rcnn) 和 [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/mask_rcnn)。这些模型可以作为下游任务的预训练模型。
## 速度基准
### 训练速度基准
-我们提供 [analyze_logs.py](https://github.com/open-mmlab/mmdetection/blob/master/tools/analysis_tools/analyze_logs.py) 来得到训练中每一次迭代的平均时间。示例请参考 [Log Analysis](https://mmdetection.readthedocs.io/en/latest/useful_tools.html#log-analysis)。
+我们提供 [analyze_logs.py](https://github.com/open-mmlab/mmdetection/blob/main/tools/analysis_tools/analyze_logs.py) 来得到训练中每一次迭代的平均时间。示例请参考 [Log Analysis](https://mmdetection.readthedocs.io/en/latest/useful_tools.html#log-analysis)。
-我们与其他流行框架的 Mask R-CNN 训练速度进行比较(数据是从 [detectron2](https://github.com/facebookresearch/detectron2/blob/master/docs/notes/benchmarks.md/) 复制而来)。在 mmdetection 中,我们使用 [mask_rcnn_r50_caffe_fpn_poly_1x_coco_v1.py](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_poly_1x_coco_v1.py) 进行基准测试。它与 detectron2 的 [mask_rcnn_R_50_FPN_noaug_1x.yaml](https://github.com/facebookresearch/detectron2/blob/master/configs/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x.yaml) 设置完全一样。同时,我们还提供了[模型权重](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug_compare_20200518-10127928.pth)和[训练 log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug_20200518_105755.log.json) 作为参考。为了跳过 GPU 预热时间,吞吐量按照100-500次迭代之间的平均吞吐量来计算。
+我们与其他流行框架的 Mask R-CNN 训练速度进行比较(数据是从 [detectron2](https://github.com/facebookresearch/detectron2/blob/main/docs/notes/benchmarks.md/) 复制而来)。在 mmdetection 中,我们使用 [mask-rcnn_r50-caffe_fpn_poly-1x_coco_v1.py](https://github.com/open-mmlab/mmdetection/blob/main/configs/mask_rcnn/mask-rcnn_r50-caffe_fpn_poly-1x_coco_v1.py) 进行基准测试。它与 detectron2 的 [mask_rcnn_R_50_FPN_noaug_1x.yaml](https://github.com/facebookresearch/detectron2/blob/main/configs/Detectron1-Comparisons/mask_rcnn_R_50_FPN_noaug_1x.yaml) 设置完全一样。同时,我们还提供了[模型权重](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug_compare_20200518-10127928.pth)和[训练 log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug/mask_rcnn_r50_caffe_fpn_poly_1x_coco_no_aug_20200518_105755.log.json) 作为参考。为了跳过 GPU 预热时间,吞吐量按照100-500次迭代之间的平均吞吐量来计算。
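The warm-up-skipping protocol described above (throughput averaged over iterations 100-500) can be sketched as follows, with simulated per-iteration timings standing in for a real training log:

```python
# Hypothetical sketch of the throughput computation: record per-iteration
# times, discard the GPU warm-up window, and report images per second over
# the steady-state slice (iterations 100-500).
def throughput(iter_times, batch_size, start=100, end=500):
    window = iter_times[start:end]
    return batch_size / (sum(window) / len(window))

# Simulated log: 100 slow warm-up iterations, then steady-state timings.
times = [0.5] * 100 + [0.1] * 400
assert abs(throughput(times, batch_size=2) - 20.0) < 1e-9
```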
| 框架 | 吞吐量 (img/s) |
| -------------------------------------------------------------------------------------- | -------------- |
@@ -267,7 +267,7 @@ MMdetection 常用到的主干网络细节如下表所示:
### 推理时间基准
-我们提供 [benchmark.py](https://github.com/open-mmlab/mmdetection/blob/master/tools/analysis_tools/benchmark.py) 对推理时间进行基准测试。此脚本将推理 2000 张图片并计算忽略前 5 次推理的平均推理时间。可以通过设置 `LOG-INTERVAL` 来改变 log 输出间隔(默认为 50)。
+我们提供 [benchmark.py](https://github.com/open-mmlab/mmdetection/blob/main/tools/analysis_tools/benchmark.py) 对推理时间进行基准测试。此脚本将推理 2000 张图片并计算忽略前 5 次推理的平均推理时间。可以通过设置 `LOG-INTERVAL` 来改变 log 输出间隔(默认为 50)。
```shell
python tools/benchmark.py ${CONFIG} ${CHECKPOINT} [--log-interval ${LOG-INTERVAL}] [--fuse-conv-bn]
@@ -295,11 +295,11 @@ python tools/benchmark.py ${CONFIG} ${CHECKPOINT} [--log-interval $[LOG-INTERVAL
### 精度
-| 模型 | 训练策略 | Detectron2 | mmdetection | 下载 |
-| -------------------------------------------------------------------------------------------------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------- | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/faster_rcnn/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco.py) | 1x | [37.9](https://github.com/facebookresearch/detectron2/blob/master/configs/COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml) | 38.0 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-5324cff8.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco_20200429_234554.log.json) |
-| [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/master/configs/mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco.py) | 1x | [38.6 & 35.2](https://github.com/facebookresearch/detectron2/blob/master/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml) | 38.8 & 35.4 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco-dbecf295.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco_20200430_054239.log.json) |
-| [Retinanet](https://github.com/open-mmlab/mmdetection/blob/master/configs/retinanet/retinanet_r50_caffe_fpn_mstrain_1x_coco.py) | 1x | [36.5](https://github.com/facebookresearch/detectron2/blob/master/configs/COCO-Detection/retinanet_R_50_FPN_1x.yaml) | 37.0 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/retinanet_r50_caffe_fpn_mstrain_1x_coco/retinanet_r50_caffe_fpn_mstrain_1x_coco-586977a0.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/retinanet_r50_caffe_fpn_mstrain_1x_coco/retinanet_r50_caffe_fpn_mstrain_1x_coco_20200430_014748.log.json) |
+| 模型 | 训练策略 | Detectron2 | mmdetection | 下载 |
+| ------------------------------------------------------------------------------------------------------------------------------- | -------- | -------------------------------------------------------------------------------------------------------------------------------------- | ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/faster_rcnn/faster-rcnn_r50-caffe_fpn_ms-1x_coco.py) | 1x | [37.9](https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-Detection/faster_rcnn_R_50_FPN_1x.yaml) | 38.0 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco-5324cff8.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco/faster_rcnn_r50_caffe_fpn_mstrain_1x_coco_20200429_234554.log.json) |
+| [Mask R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/mask_rcnn/mask-rcnn_r50-caffe_fpn_ms-poly-1x_coco.py)  | 1x       | [38.6 & 35.2](https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_1x.yaml)   | 38.8 & 35.4 | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco-dbecf295.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco/mask_rcnn_r50_caffe_fpn_mstrain-poly_1x_coco_20200430_054239.log.json) |
+| [Retinanet](https://github.com/open-mmlab/mmdetection/blob/main/configs/retinanet/retinanet_r50-caffe_fpn_ms-1x_coco.py)        | 1x       | [36.5](https://github.com/facebookresearch/detectron2/blob/main/configs/COCO-Detection/retinanet_R_50_FPN_1x.yaml)                     | 37.0        | [model](https://download.openmmlab.com/mmdetection/v2.0/benchmark/retinanet_r50_caffe_fpn_mstrain_1x_coco/retinanet_r50_caffe_fpn_mstrain_1x_coco-586977a0.pth) \| [log](https://download.openmmlab.com/mmdetection/v2.0/benchmark/retinanet_r50_caffe_fpn_mstrain_1x_coco/retinanet_r50_caffe_fpn_mstrain_1x_coco_20200430_014748.log.json)                     |
### 训练速度
diff --git a/docs/zh_cn/notes/faq.md b/docs/zh_cn/notes/faq.md
index dd2bbb7ee7b..a3695ae41de 100644
--- a/docs/zh_cn/notes/faq.md
+++ b/docs/zh_cn/notes/faq.md
@@ -1,6 +1,6 @@
# 常见问题解答
-我们在这里列出了使用时的一些常见问题及其相应的解决方案。 如果您发现有一些问题被遗漏,请随时提 PR 丰富这个列表。 如果您无法在此获得帮助,请使用 [issue模板](https://github.com/open-mmlab/mmdetection/blob/master/.github/ISSUE_TEMPLATE/error-report.md/)创建问题,但是请在模板中填写所有必填信息,这有助于我们更快定位问题。
+我们在这里列出了使用时的一些常见问题及其相应的解决方案。 如果您发现有一些问题被遗漏,请随时提 PR 丰富这个列表。 如果您无法在此获得帮助,请使用 [issue模板](https://github.com/open-mmlab/mmdetection/blob/main/.github/ISSUE_TEMPLATE/error-report.md/)创建问题,但是请在模板中填写所有必填信息,这有助于我们更快定位问题。
## PyTorch 2.0 支持
@@ -46,7 +46,8 @@ export DYNAMO_CACHE_SIZE_LIMIT = 4
| MMDetection 版本 | MMCV 版本 | MMEngine 版本 |
| :--------------: | :---------------------: | :----------------------: |
- | 3.x | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.6.0, \<1.0.0 |
+ | main | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
+ | 3.x | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.7.1, \<1.0.0 |
| 3.0.0rc6 | mmcv>=2.0.0rc4, \<2.1.0 | mmengine>=0.6.0, \<1.0.0 |
| 3.0.0rc5 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
| 3.0.0rc4 | mmcv>=2.0.0rc1, \<2.1.0 | mmengine>=0.3.0, \<1.0.0 |
diff --git a/docs/zh_cn/overview.md b/docs/zh_cn/overview.md
index deaa5a2c173..5269aed896d 100644
--- a/docs/zh_cn/overview.md
+++ b/docs/zh_cn/overview.md
@@ -42,13 +42,13 @@ MMDetection 由 7 个主要部分组成,apis、structures、datasets、models
2. MMDetection 的基本使用方法请参考以下教程。
- - [训练和测试](https://mmdetection.readthedocs.io/zh_CN/dev-3.x/user_guides/index.html#train-test)
+ - [训练和测试](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/index.html#train-test)
- - [实用工具](https://mmdetection.readthedocs.io/zh_CN/dev-3.x/user_guides/index.html#useful-tools)
+ - [实用工具](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/index.html#useful-tools)
3. 参考以下教程深入了解:
- - [基础概念](https://mmdetection.readthedocs.io/zh_CN/dev-3.x/advanced_guides/index.html#basic-concepts)
- - [组件定制](https://mmdetection.readthedocs.io/zh_CN/dev-3.x/advanced_guides/index.html#component-customization)
+ - [基础概念](https://mmdetection.readthedocs.io/zh_CN/latest/advanced_guides/index.html#basic-concepts)
+ - [组件定制](https://mmdetection.readthedocs.io/zh_CN/latest/advanced_guides/index.html#component-customization)
4. 对于 MMDetection 2.x 版本的用户,我们提供了[迁移指南](./migration/migration.md),帮助您完成新版本的适配。
diff --git a/docs/zh_cn/stat.py b/docs/zh_cn/stat.py
index aa2c9de7398..1ea5fbd25b8 100755
--- a/docs/zh_cn/stat.py
+++ b/docs/zh_cn/stat.py
@@ -6,7 +6,7 @@
import numpy as np
-url_prefix = 'https://github.com/open-mmlab/mmdetection/blob/3.x/'
+url_prefix = 'https://github.com/open-mmlab/mmdetection/blob/main/'
files = sorted(glob.glob('../configs/*/README.md'))
diff --git a/docs/zh_cn/user_guides/config.md b/docs/zh_cn/user_guides/config.md
index 319c78ac312..3a670bf8ada 100644
--- a/docs/zh_cn/user_guides/config.md
+++ b/docs/zh_cn/user_guides/config.md
@@ -14,14 +14,14 @@ MMDetection 采用模块化设计,所有功能的模块都可以通过配置
model = dict(
type='MaskRCNN', # 检测器名
data_preprocessor=dict( # 数据预处理器的配置,通常包括图像归一化和 padding
- type='DetDataPreprocessor', # 数据预处理器的类型,参考 https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.data_preprocessors.DetDataPreprocessor
+ type='DetDataPreprocessor', # 数据预处理器的类型,参考 https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.data_preprocessors.DetDataPreprocessor
mean=[123.675, 116.28, 103.53], # 用于预训练骨干网络的图像归一化通道均值,按 R、G、B 排序
std=[58.395, 57.12, 57.375], # 用于预训练骨干网络的图像归一化通道标准差,按 R、G、B 排序
bgr_to_rgb=True, # 是否将图片通道从 BGR 转为 RGB
pad_mask=True, # 是否填充实例分割掩码
pad_size_divisor=32), # padding 后的图像的大小应该可以被 ``pad_size_divisor`` 整除
backbone=dict( # 主干网络的配置文件
- type='ResNet', # 主干网络的类别,可用选项请参考 https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.backbones.ResNet
+ type='ResNet', # 主干网络的类别,可用选项请参考 https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.backbones.ResNet
depth=50, # 主干网络的深度,对于 ResNet 和 ResNext 通常设置为 50 或 101
num_stages=4, # 主干网络状态(stages)的数目,这些状态产生的特征图作为后续的 head 的输入
out_indices=(0, 1, 2, 3), # 每个状态产生的特征图输出的索引
@@ -33,34 +33,34 @@ model = dict(
style='pytorch', # 主干网络的风格,'pytorch' 意思是步长为2的层为 3x3 卷积, 'caffe' 意思是步长为2的层为 1x1 卷积
init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), # 加载通过 ImageNet 预训练的模型
neck=dict(
- type='FPN', # 检测器的 neck 是 FPN,我们同样支持 'NASFPN', 'PAFPN' 等,更多细节可以参考 https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.necks.FPN
+ type='FPN', # 检测器的 neck 是 FPN,我们同样支持 'NASFPN', 'PAFPN' 等,更多细节可以参考 https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.necks.FPN
in_channels=[256, 512, 1024, 2048], # 输入通道数,这与主干网络的输出通道一致
out_channels=256, # 金字塔特征图每一层的输出通道
num_outs=5), # 输出的范围(scales)
rpn_head=dict(
- type='RPNHead', # rpn_head 的类型是 'RPNHead', 我们也支持 'GARPNHead' 等,更多细节可以参考 https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.dense_heads.RPNHead
+ type='RPNHead', # rpn_head 的类型是 'RPNHead', 我们也支持 'GARPNHead' 等,更多细节可以参考 https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.dense_heads.RPNHead
in_channels=256, # 每个输入特征图的输入通道,这与 neck 的输出通道一致
feat_channels=256, # head 卷积层的特征通道
anchor_generator=dict( # 锚点(Anchor)生成器的配置
- type='AnchorGenerator', # 大多数方法使用 AnchorGenerator 作为锚点生成器, SSD 检测器使用 `SSDAnchorGenerator`。更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/prior_generators/anchor_generator.py#L18
+ type='AnchorGenerator', # 大多数方法使用 AnchorGenerator 作为锚点生成器, SSD 检测器使用 `SSDAnchorGenerator`。更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/prior_generators/anchor_generator.py#L18
scales=[8], # 锚点的基本比例,特征图某一位置的锚点面积为 scale * base_sizes
ratios=[0.5, 1.0, 2.0], # 高度和宽度之间的比率
strides=[4, 8, 16, 32, 64]), # 锚生成器的步幅。这与 FPN 特征步幅一致。 如果未设置 base_sizes,则当前步幅值将被视为 base_sizes
bbox_coder=dict( # 在训练和测试期间对框进行编码和解码
- type='DeltaXYWHBBoxCoder', # 框编码器的类别,'DeltaXYWHBBoxCoder' 是最常用的,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/coders/delta_xywh_bbox_coder.py#L13
+ type='DeltaXYWHBBoxCoder', # 框编码器的类别,'DeltaXYWHBBoxCoder' 是最常用的,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/coders/delta_xywh_bbox_coder.py#L13
target_means=[0.0, 0.0, 0.0, 0.0], # 用于编码和解码框的目标均值
target_stds=[1.0, 1.0, 1.0, 1.0]), # 用于编码和解码框的标准差
loss_cls=dict( # 分类分支的损失函数配置
- type='CrossEntropyLoss', # 分类分支的损失类型,我们也支持 FocalLoss 等,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/losses/cross_entropy_loss.py#L201
+ type='CrossEntropyLoss', # 分类分支的损失类型,我们也支持 FocalLoss 等,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/losses/cross_entropy_loss.py#L201
use_sigmoid=True, # RPN 通常进行二分类,所以通常使用 sigmoid 函数
            loss_weight=1.0), # 分类分支的损失权重
loss_bbox=dict( # 回归分支的损失函数配置
- type='L1Loss', # 损失类型,我们还支持许多 IoU Losses 和 Smooth L1-loss 等,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/losses/smooth_l1_loss.py#L56
+ type='L1Loss', # 损失类型,我们还支持许多 IoU Losses 和 Smooth L1-loss 等,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/losses/smooth_l1_loss.py#L56
loss_weight=1.0)), # 回归分支的损失权重
roi_head=dict( # RoIHead 封装了两步(two-stage)/级联(cascade)检测器的第二步
- type='StandardRoIHead', # RoI head 的类型,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/roi_heads/standard_roi_head.py#L17
+ type='StandardRoIHead', # RoI head 的类型,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/roi_heads/standard_roi_head.py#L17
bbox_roi_extractor=dict( # 用于 bbox 回归的 RoI 特征提取器
- type='SingleRoIExtractor', # RoI 特征提取器的类型,大多数方法使用 SingleRoIExtractor,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py#L13
+ type='SingleRoIExtractor', # RoI 特征提取器的类型,大多数方法使用 SingleRoIExtractor,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/roi_heads/roi_extractors/single_level_roi_extractor.py#L13
roi_layer=dict( # RoI 层的配置
type='RoIAlign', # RoI 层的类别, 也支持 DeformRoIPoolingPack 和 ModulatedDeformRoIPoolingPack,更多细节请参考 https://mmcv.readthedocs.io/en/latest/api.html#mmcv.ops.RoIAlign
output_size=7, # 特征图的输出大小
@@ -68,7 +68,7 @@ model = dict(
out_channels=256, # 提取特征的输出通道
featmap_strides=[4, 8, 16, 32]), # 多尺度特征图的步幅,应该与主干的架构保持一致
bbox_head=dict( # RoIHead 中 box head 的配置
- type='Shared2FCBBoxHead', # bbox head 的类别,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py#L220
+ type='Shared2FCBBoxHead', # bbox head 的类别,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/roi_heads/bbox_heads/convfc_bbox_head.py#L220
in_channels=256, # bbox head 的输入通道。 这与 roi_extractor 中的 out_channels 一致
fc_out_channels=1024, # FC 层的输出特征通道
roi_feat_size=7, # 候选区域(Region of Interest)特征的大小
@@ -94,7 +94,7 @@ model = dict(
out_channels=256, # 提取特征的输出通道
featmap_strides=[4, 8, 16, 32]), # 多尺度特征图的步幅
mask_head=dict( # mask 预测 head 模型
- type='FCNMaskHead', # mask head 的类型,更多细节请参考 https://mmdetection.readthedocs.io/en/3.x/api.html#mmdet.models.roi_heads.FCNMaskHead
+ type='FCNMaskHead', # mask head 的类型,更多细节请参考 https://mmdetection.readthedocs.io/en/latest/api.html#mmdet.models.roi_heads.FCNMaskHead
num_convs=4, # mask head 中的卷积层数
in_channels=256, # 输入通道,应与 mask roi extractor 的输出通道一致
conv_out_channels=256, # 卷积层的输出通道
@@ -106,14 +106,14 @@ model = dict(
train_cfg = dict( # rpn 和 rcnn 训练超参数的配置
rpn=dict( # rpn 的训练配置
assigner=dict( # 分配器(assigner)的配置
- type='MaxIoUAssigner', # 分配器的类型,MaxIoUAssigner 用于许多常见的检测器,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/assigners/max_iou_assigner.py#L14
+ type='MaxIoUAssigner', # 分配器的类型,MaxIoUAssigner 用于许多常见的检测器,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/assigners/max_iou_assigner.py#L14
pos_iou_thr=0.7, # IoU >= 0.7(阈值) 被视为正样本
neg_iou_thr=0.3, # IoU < 0.3(阈值) 被视为负样本
min_pos_iou=0.3, # 将框作为正样本的最小 IoU 阈值
match_low_quality=True, # 是否匹配低质量的框(更多细节见 API 文档)
ignore_iof_thr=-1), # 忽略 bbox 的 IoF 阈值
sampler=dict( # 正/负采样器(sampler)的配置
- type='RandomSampler', # 采样器类型,还支持 PseudoSampler 和其他采样器,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/samplers/random_sampler.py#L14
+ type='RandomSampler', # 采样器类型,还支持 PseudoSampler 和其他采样器,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/samplers/random_sampler.py#L14
num=256, # 样本数量。
pos_fraction=0.5, # 正样本占总样本的比例
neg_pos_ub=-1, # 基于正样本数量的负样本上限
@@ -133,14 +133,14 @@ model = dict(
min_bbox_size=0), # 允许的最小 box 尺寸
rcnn=dict( # roi head 的配置。
assigner=dict( # 第二阶段分配器的配置,这与 rpn 中的不同
- type='MaxIoUAssigner', # 分配器的类型,MaxIoUAssigner 目前用于所有 roi_heads。更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/assigners/max_iou_assigner.py#L14
+ type='MaxIoUAssigner', # 分配器的类型,MaxIoUAssigner 目前用于所有 roi_heads。更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/assigners/max_iou_assigner.py#L14
pos_iou_thr=0.5, # IoU >= 0.5(阈值)被认为是正样本
neg_iou_thr=0.5, # IoU < 0.5(阈值)被认为是负样本
min_pos_iou=0.5, # 将 box 作为正样本的最小 IoU 阈值
match_low_quality=False, # 是否匹配低质量下的 box(有关更多详细信息,请参阅 API 文档)
ignore_iof_thr=-1), # 忽略 bbox 的 IoF 阈值
sampler=dict(
- type='RandomSampler', # 采样器的类型,还支持 PseudoSampler 和其他采样器,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/models/task_modules/samplers/random_sampler.py#L14
+ type='RandomSampler', # 采样器的类型,还支持 PseudoSampler 和其他采样器,更多细节请参考 https://github.com/open-mmlab/mmdetection/blob/main/mmdet/models/task_modules/samplers/random_sampler.py#L14
num=512, # 样本数量
pos_fraction=0.25, # 正样本占总样本的比例
neg_pos_ub=-1, # 基于正样本数量的负样本上限
diff --git a/docs/zh_cn/user_guides/dataset_prepare.md b/docs/zh_cn/user_guides/dataset_prepare.md
index 4ebbd668a72..b33ec3bd309 100644
--- a/docs/zh_cn/user_guides/dataset_prepare.md
+++ b/docs/zh_cn/user_guides/dataset_prepare.md
@@ -1,6 +1,6 @@
## 数据集准备
-MMDetection 支持多个公共数据集,包括 [COCO](https://cocodataset.org/), [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC), [Cityscapes](https://www.cityscapes-dataset.com/) 和 [其他更多数据集](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/_base_/datasets)。
+MMDetection 支持多个公共数据集,包括 [COCO](https://cocodataset.org/), [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC), [Cityscapes](https://www.cityscapes-dataset.com/) 和 [其他更多数据集](https://github.com/open-mmlab/mmdetection/tree/main/configs/_base_/datasets)。
一些公共数据集,比如 Pascal VOC 及其镜像数据集,或者 COCO 等数据集都可以从官方网站或者镜像网站获取。注意:在检测任务中,Pascal VOC 2012 是 Pascal VOC 2007 的无交集扩展,我们通常将两者一起使用。 我们建议将数据集下载,然后解压到项目外部的某个文件夹内,然后通过符号链接的方式,将数据集根目录链接到 `$MMDETECTION/data` 文件夹下, 如果你的文件夹结构和下方不同的话,你需要在配置文件中改变对应的路径。
diff --git a/docs/zh_cn/user_guides/deploy.md b/docs/zh_cn/user_guides/deploy.md
index 135aeb5b0af..da2e7f68241 100644
--- a/docs/zh_cn/user_guides/deploy.md
+++ b/docs/zh_cn/user_guides/deploy.md
@@ -16,7 +16,7 @@
## 安装
-请参考[此处](https://mmdetection.readthedocs.io/en/3.x/get_started.html)安装 mmdet。然后,按照[说明](https://mmdeploy.readthedocs.io/zh_CN/1.x/get_started.html#mmdeploy)安装 mmdeploy。
+请参考[此处](https://mmdetection.readthedocs.io/en/latest/get_started.html)安装 mmdet。然后,按照[说明](https://mmdeploy.readthedocs.io/zh_CN/1.x/get_started.html#mmdeploy)安装 mmdeploy。
```{note}
如果安装的是 mmdeploy 预编译包,那么也请通过 'git clone https://github.com/open-mmlab/mmdeploy.git --depth=1' 下载 mmdeploy 源码。因为它包含了部署时要用到的配置文件
@@ -24,7 +24,7 @@
## 模型转换
-假设在安装步骤中,mmdetection 和 mmdeploy 代码库在同级目录下,并且当前的工作目录为 mmdetection 的根目录,那么以 [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py) 模型为例,你可以从[此处](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth)下载对应的 checkpoint,并使用以下代码将之转换为 onnx 模型:
+假设在安装步骤中,mmdetection 和 mmdeploy 代码库在同级目录下,并且当前的工作目录为 mmdetection 的根目录,那么以 [Faster R-CNN](https://github.com/open-mmlab/mmdetection/blob/main/configs/faster_rcnn/faster-rcnn_r50_fpn_1x_coco.py) 模型为例,你可以从[此处](https://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_1x_coco_20200130-047c8118.pth)下载对应的 checkpoint,并使用以下代码将之转换为 onnx 模型:
```python
from mmdeploy.apis import torch2onnx
diff --git a/docs/zh_cn/user_guides/inference.md b/docs/zh_cn/user_guides/inference.md
index c582dfa8279..1f504cc69e2 100644
--- a/docs/zh_cn/user_guides/inference.md
+++ b/docs/zh_cn/user_guides/inference.md
@@ -4,9 +4,9 @@ MMDetection 提供了许多预训练好的检测模型,可以在 [Model Zoo](h
推理具体指使用训练好的模型来检测图像上的目标,本文将会展示具体步骤。
-在 MMDetection 中,一个模型被定义为一个[配置文件](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/config.html) 和对应被存储在 checkpoint 文件内的模型参数的集合。
+在 MMDetection 中,一个模型被定义为一个[配置文件](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/config.html) 和对应被存储在 checkpoint 文件内的模型参数的集合。
-首先,我们建议从 [RTMDet](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet) 开始,其 [配置](https://github.com/open-mmlab/mmdetection/blob/3.x/configs/rtmdet/rtmdet_l_8xb32-300e_coco.py) 文件和 [checkpoint](https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_l_8xb32-300e_coco/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth) 文件在此。
+首先,我们建议从 [RTMDet](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet) 开始,其 [配置](https://github.com/open-mmlab/mmdetection/blob/main/configs/rtmdet/rtmdet_l_8xb32-300e_coco.py) 文件和 [checkpoint](https://download.openmmlab.com/mmdetection/v3.0/rtmdet/rtmdet_l_8xb32-300e_coco/rtmdet_l_8xb32-300e_coco_20220719_112030-5a0be7c4.pth) 文件在此。
我们建议将 checkpoint 文件下载到 `checkpoints` 文件夹内。
## 推理的高层编程接口
@@ -83,13 +83,13 @@ for frame in track_iter_progress(video_reader):
cv2.destroyAllWindows()
```
-Jupyter notebook 上的演示样例在 [demo/inference_demo.ipynb](https://github.com/open-mmlab/mmdetection/blob/3.x/demo/inference_demo.ipynb) 。
+Jupyter notebook 上的演示样例在 [demo/inference_demo.ipynb](https://github.com/open-mmlab/mmdetection/blob/main/demo/inference_demo.ipynb) 。
注意: `inference_detector` 目前仅支持单张图片的推理。
## 演示样例
-我们还提供了三个演示脚本,它们是使用高层编程接口实现的。[源码在此](https://github.com/open-mmlab/mmdetection/blob/3.x/demo) 。
+我们还提供了三个演示脚本,它们是使用高层编程接口实现的。[源码在此](https://github.com/open-mmlab/mmdetection/blob/main/demo) 。
### 图片样例
diff --git a/docs/zh_cn/user_guides/label_studio.md b/docs/zh_cn/user_guides/label_studio.md
index d465e523064..5d3c17326e0 100644
--- a/docs/zh_cn/user_guides/label_studio.md
+++ b/docs/zh_cn/user_guides/label_studio.md
@@ -40,7 +40,7 @@ mim install "mmcv>=2.0.0rc0"
安装 MMDetection
```shell
-git clone https://github.com/open-mmlab/mmdetection -b dev-3.x
+git clone https://github.com/open-mmlab/mmdetection
cd mmdetection
pip install -v -e .
```
diff --git a/docs/zh_cn/user_guides/test.md b/docs/zh_cn/user_guides/test.md
index 96e28c89219..1b165b049d9 100644
--- a/docs/zh_cn/user_guides/test.md
+++ b/docs/zh_cn/user_guides/test.md
@@ -46,7 +46,7 @@ bash tools/dist_test.sh \
假设你已经下载了 checkpoint 文件到 `checkpoints/` 文件下了。
-1. 测试 RTMDet 并可视化其结果。按任意键继续下张图片的测试。配置文件和 checkpoint 文件 [在此](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet) 。
+1. 测试 RTMDet 并可视化其结果。按任意键继续下张图片的测试。配置文件和 checkpoint 文件 [在此](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet) 。
```shell
python tools/test.py \
@@ -55,7 +55,7 @@ bash tools/dist_test.sh \
--show
```
-2. 测试 RTMDet,并为了之后的可视化保存绘制的图像。配置文件和 checkpoint 文件 [在此](https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet) 。
+2. 测试 RTMDet,并为了之后的可视化保存绘制的图像。配置文件和 checkpoint 文件 [在此](https://github.com/open-mmlab/mmdetection/tree/main/configs/rtmdet) 。
```shell
python tools/test.py \
@@ -117,7 +117,7 @@ bash tools/dist_test.sh \
### 不使用 Ground Truth 标注进行测试
-MMDetection 支持在不使用 ground-truth 标注的情况下对模型进行测试,这需要用到 `CocoDataset`。如果你的数据集格式不是 COCO 格式的,请将其转化成 COCO 格式。如果你的数据集格式是 VOC 或者 Cityscapes,你可以使用 [tools/dataset_converters](https://github.com/open-mmlab/mmdetection/tree/master/tools/dataset_converters) 内的脚本直接将其转化成 COCO 格式。如果是其他格式,可以使用 [images2coco 脚本](https://github.com/open-mmlab/mmdetection/tree/master/tools/dataset_converters/images2coco.py) 进行转换。
+MMDetection 支持在不使用 ground-truth 标注的情况下对模型进行测试,这需要用到 `CocoDataset`。如果你的数据集格式不是 COCO 格式的,请将其转化成 COCO 格式。如果你的数据集格式是 VOC 或者 Cityscapes,你可以使用 [tools/dataset_converters](https://github.com/open-mmlab/mmdetection/tree/main/tools/dataset_converters) 内的脚本直接将其转化成 COCO 格式。如果是其他格式,可以使用 [images2coco 脚本](https://github.com/open-mmlab/mmdetection/tree/main/tools/dataset_converters/images2coco.py) 进行转换。
```shell
python tools/dataset_converters/images2coco.py \
diff --git a/docs/zh_cn/user_guides/train.md b/docs/zh_cn/user_guides/train.md
index 0ae65b24366..428eb11d9b3 100644
--- a/docs/zh_cn/user_guides/train.md
+++ b/docs/zh_cn/user_guides/train.md
@@ -134,7 +134,7 @@ Slurm 是一个常见的计算集群调度系统。在 Slurm 管理的集群上
GPUS=16 ./tools/slurm_train.sh dev mask_r50_1x configs/mask_rcnn_r50_fpn_1x_coco.py /nfs/xxxx/mask_rcnn_r50_fpn_1x
```
-你可以查看 [源码](https://github.com/open-mmlab/mmdetection/blob/master/tools/slurm_train.sh) 来检查全部的参数和环境变量.
+你可以查看 [源码](https://github.com/open-mmlab/mmdetection/blob/main/tools/slurm_train.sh) 来检查全部的参数和环境变量。
在使用 Slurm 时,端口需要以下方的某个方法之一来设置。
@@ -438,7 +438,7 @@ load_from = 'https://download.openmmlab.com/mmdetection/v2.0/mask_rcnn/mask_rcnn
python tools/train.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py
```
-参考 [在标准数据集上训练预定义的模型](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/train.html#id1) 来获取更多详细的使用方法。
+参考 [在标准数据集上训练预定义的模型](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/train.html#id1) 来获取更多详细的使用方法。
## 测试以及推理
@@ -448,4 +448,4 @@ python tools/train.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon
python tools/test.py configs/balloon/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon.py work_dirs/mask-rcnn_r50-caffe_fpn_ms-poly-1x_balloon/epoch_12.pth
```
-参考 [测试现有模型](https://mmdetection.readthedocs.io/zh_CN/3.x/user_guides/test.html) 来获取更多详细的使用方法。
+参考 [测试现有模型](https://mmdetection.readthedocs.io/zh_CN/latest/user_guides/test.html) 来获取更多详细的使用方法。
diff --git a/docs/zh_cn/user_guides/useful_hooks.md b/docs/zh_cn/user_guides/useful_hooks.md
index 7d24ec4608a..07a59df2a8b 100644
--- a/docs/zh_cn/user_guides/useful_hooks.md
+++ b/docs/zh_cn/user_guides/useful_hooks.md
@@ -9,7 +9,7 @@ MMDetection 和 MMEngine 为用户提供了多种多样实用的钩子(Hook)
## MemoryProfilerHook
-[内存分析钩子](https://github.com/open-mmlab/mmdetection/blob/3.x/mmdet/engine/hooks/memory_profiler_hook.py)
+[内存分析钩子](https://github.com/open-mmlab/mmdetection/blob/main/mmdet/engine/hooks/memory_profiler_hook.py)
记录了包括虚拟内存、交换内存、当前进程在内的所有内存信息,它能够帮助捕捉系统的使用状况与发现隐藏的内存泄露问题。为了使用这个钩子,你需要先通过 `pip install memory_profiler psutil` 命令安装 `memory_profiler` 和 `psutil`。
### 使用
diff --git a/docs/zh_cn/user_guides/useful_tools.md b/docs/zh_cn/user_guides/useful_tools.md
index e2b2d626d70..e53ffdfc60a 100644
--- a/docs/zh_cn/user_guides/useful_tools.md
+++ b/docs/zh_cn/user_guides/useful_tools.md
@@ -296,7 +296,7 @@ Params: 37.74 M
**注意**:这个工具还只是实验性质,我们不保证这个数值是绝对正确的。你可以将他用于简单的比较,但如果用于科技论文报告需要再三检查确认。
1. FLOPs 与输入的形状大小相关,参数量没有这个关系,默认的输入形状大小为 (1, 3, 1280, 800) 。
-2. 一些算子并不计入 FLOPs,比如 GN 或其他自定义的算子。你可以参考 [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/dev-3.x/mmcv/cnn/utils/flops_counter.py) 查看更详细的说明。
+2. 一些算子并不计入 FLOPs,比如 GN 或其他自定义的算子。你可以参考 [`mmcv.cnn.get_model_complexity_info()`](https://github.com/open-mmlab/mmcv/blob/2.x/mmcv/cnn/utils/flops_counter.py) 查看更详细的说明。
3. 两阶段检测的 FLOPs 大小取决于 proposal 的数量。
## 模型转换
diff --git a/mmdet/__init__.py b/mmdet/__init__.py
index d48c523bc79..e9c1489c7e9 100644
--- a/mmdet/__init__.py
+++ b/mmdet/__init__.py
@@ -9,7 +9,7 @@
mmcv_maximum_version = '2.1.0'
mmcv_version = digit_version(mmcv.__version__)
-mmengine_minimum_version = '0.6.0'
+mmengine_minimum_version = '0.7.1'
mmengine_maximum_version = '1.0.0'
mmengine_version = digit_version(mmengine.__version__)
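The bumped `mmengine_minimum_version` feeds the import-time guard sketched in this hunk. As a rough, self-contained illustration of that pattern (using a simplified stand-in for mmengine's `digit_version`, which in reality handles more pre-release forms):

```python
def digit_version(version_str):
    """Simplified stand-in for mmengine.utils.digit_version.

    Handles plain 'X.Y.Z' and 'X.Y.ZrcN' strings; a release candidate
    sorts before its corresponding final release.
    """
    release, _, rc = version_str.partition('rc')
    parts = [int(x) for x in release.split('.')]
    # A final release outranks any of its release candidates.
    parts.append(int(rc) if rc else float('inf'))
    return tuple(parts)


def assert_version_in_range(name, version, minimum, maximum):
    """Mimic the half-open range check done when mmdet is imported."""
    if not (digit_version(minimum) <= digit_version(version)
            < digit_version(maximum)):
        raise RuntimeError(
            f'{name}=={version} is incompatible; '
            f'please install {name}>={minimum},<{maximum}.')


# The new floor from this patch: mmengine>=0.7.1,<1.0.0
assert_version_in_range('mmengine', '0.7.1', '0.7.1', '1.0.0')
assert_version_in_range('mmcv', '2.0.0rc4', '2.0.0rc4', '2.1.0')
```

With this ordering, `mmengine==0.6.0` now fails the check, which is why `mminstall.txt` and the FAQ table are updated in the same patch.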
diff --git a/mmdet/datasets/base_det_dataset.py b/mmdet/datasets/base_det_dataset.py
index 379cc4e9f63..cbc6bad46f9 100644
--- a/mmdet/datasets/base_det_dataset.py
+++ b/mmdet/datasets/base_det_dataset.py
@@ -35,7 +35,7 @@ def __init__(self,
raise RuntimeError(
'The `file_client_args` is deprecated, '
'please use `backend_args` instead, please refer to'
- 'https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/_base_/datasets/coco_detection.py' # noqa: E501
+ 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501
)
super().__init__(*args, **kwargs)
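The `RuntimeError` above points users at the updated `coco_detection.py` config for the `file_client_args` → `backend_args` rename. A hypothetical minimal migration (the dataset fields are illustrative; for plain local files `backend_args=None` is also sufficient) could look like:

```python
# Deprecated style -- mmdet now raises RuntimeError on `file_client_args`:
old_cfg = dict(
    type='CocoDataset',
    ann_file='annotations/instances_train2017.json',
    file_client_args=dict(backend='disk'),
)

# Current style: the storage options move under `backend_args` instead.
new_cfg = dict(old_cfg)
new_cfg['backend_args'] = new_cfg.pop('file_client_args')
```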
diff --git a/mmdet/datasets/transforms/loading.py b/mmdet/datasets/transforms/loading.py
index 69e9a0ac621..1a408e4d4ec 100644
--- a/mmdet/datasets/transforms/loading.py
+++ b/mmdet/datasets/transforms/loading.py
@@ -110,7 +110,7 @@ def __init__(
raise RuntimeError(
'The `file_client_args` is deprecated, '
'please use `backend_args` instead, please refer to'
- 'https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/_base_/datasets/coco_detection.py' # noqa: E501
+ 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501
)
def transform(self, results: dict) -> dict:
diff --git a/mmdet/evaluation/metrics/cityscapes_metric.py b/mmdet/evaluation/metrics/cityscapes_metric.py
index 84c35390bee..e5cdc179a3c 100644
--- a/mmdet/evaluation/metrics/cityscapes_metric.py
+++ b/mmdet/evaluation/metrics/cityscapes_metric.py
@@ -103,7 +103,7 @@ def __init__(self,
raise RuntimeError(
'The `file_client_args` is deprecated, '
'please use `backend_args` instead, please refer to'
- 'https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/_base_/datasets/coco_detection.py' # noqa: E501
+ 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501
)
self.seg_prefix = seg_prefix
diff --git a/mmdet/evaluation/metrics/coco_metric.py b/mmdet/evaluation/metrics/coco_metric.py
index 00c8421c254..f77d6516bfa 100644
--- a/mmdet/evaluation/metrics/coco_metric.py
+++ b/mmdet/evaluation/metrics/coco_metric.py
@@ -115,7 +115,7 @@ def __init__(self,
raise RuntimeError(
'The `file_client_args` is deprecated, '
'please use `backend_args` instead, please refer to'
- 'https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/_base_/datasets/coco_detection.py' # noqa: E501
+ 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501
)
# if ann_file is not specified,
diff --git a/mmdet/evaluation/metrics/coco_panoptic_metric.py b/mmdet/evaluation/metrics/coco_panoptic_metric.py
index 1ccf796d917..475e51dbc19 100644
--- a/mmdet/evaluation/metrics/coco_panoptic_metric.py
+++ b/mmdet/evaluation/metrics/coco_panoptic_metric.py
@@ -115,7 +115,7 @@ def __init__(self,
raise RuntimeError(
'The `file_client_args` is deprecated, '
'please use `backend_args` instead, please refer to'
- 'https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/_base_/datasets/coco_detection.py' # noqa: E501
+ 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501
)
if ann_file:
diff --git a/mmdet/evaluation/metrics/crowdhuman_metric.py b/mmdet/evaluation/metrics/crowdhuman_metric.py
index 3bec5b53685..de2a54edc2b 100644
--- a/mmdet/evaluation/metrics/crowdhuman_metric.py
+++ b/mmdet/evaluation/metrics/crowdhuman_metric.py
@@ -100,7 +100,7 @@ def __init__(self,
raise RuntimeError(
'The `file_client_args` is deprecated, '
'please use `backend_args` instead, please refer to'
- 'https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/_base_/datasets/coco_detection.py' # noqa: E501
+ 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501
)
assert eval_mode in [0, 1, 2], \
diff --git a/mmdet/evaluation/metrics/dump_proposals_metric.py b/mmdet/evaluation/metrics/dump_proposals_metric.py
index 68dc2d5ab84..9e9c53654c1 100644
--- a/mmdet/evaluation/metrics/dump_proposals_metric.py
+++ b/mmdet/evaluation/metrics/dump_proposals_metric.py
@@ -53,7 +53,7 @@ def __init__(self,
raise RuntimeError(
'The `file_client_args` is deprecated, '
'please use `backend_args` instead, please refer to'
- 'https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/_base_/datasets/coco_detection.py' # noqa: E501
+ 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501
)
self.output_dir = output_dir
assert proposals_file.endswith(('.pkl', '.pickle')), \
diff --git a/mmdet/evaluation/metrics/lvis_metric.py b/mmdet/evaluation/metrics/lvis_metric.py
index b4b3dd44f9a..e4dd6141c0e 100644
--- a/mmdet/evaluation/metrics/lvis_metric.py
+++ b/mmdet/evaluation/metrics/lvis_metric.py
@@ -122,7 +122,7 @@ def __init__(self,
raise RuntimeError(
'The `file_client_args` is deprecated, '
'please use `backend_args` instead, please refer to'
- 'https://github.com/open-mmlab/mmdetection/blob/dev-3.x/configs/_base_/datasets/coco_detection.py' # noqa: E501
+ 'https://github.com/open-mmlab/mmdetection/blob/main/configs/_base_/datasets/coco_detection.py' # noqa: E501
)
# if ann_file is not specified,
diff --git a/mmdet/version.py b/mmdet/version.py
index 56a7e9d62ce..24951882f40 100644
--- a/mmdet/version.py
+++ b/mmdet/version.py
@@ -1,6 +1,6 @@
# Copyright (c) OpenMMLab. All rights reserved.
-__version__ = '3.0.0rc6'
+__version__ = '3.0.0'
short_version = __version__
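`mmdet/version.py` also derives a `version_info` tuple from this string, which is why the `3.0.0rc6` → `3.0.0` bump is safe for tuple comparisons. A simplified re-implementation of that parsing (the real helper's exact behavior may differ slightly):

```python
def parse_version_info(version_str):
    """Sketch of the parser in mmdet/version.py.

    '3.0.0'    -> (3, 0, 0)
    '3.0.0rc6' -> (3, 0, 0, 'rc6')
    """
    version_info = []
    for x in version_str.split('.'):
        if x.isdigit():
            version_info.append(int(x))
        elif 'rc' in x:
            patch, rc = x.split('rc')
            version_info.append(int(patch))
            version_info.append(f'rc{rc}')
    return tuple(version_info)


version_info = parse_version_info('3.0.0')
```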
diff --git a/projects/Detic/README.md b/projects/Detic/README.md
index 4e99779342d..871b426e895 100644
--- a/projects/Detic/README.md
+++ b/projects/Detic/README.md
@@ -145,10 +145,10 @@ A project does not necessarily have to be finished in a single PR, but it's esse
- [ ] Metafile.yml
-
+
- [ ] Move your modules into the core package following the codebase's file hierarchy structure.
-
+
- [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.
diff --git a/projects/DiffusionDet/README.md b/projects/DiffusionDet/README.md
index c96f3c82943..5542d9a59a0 100644
--- a/projects/DiffusionDet/README.md
+++ b/projects/DiffusionDet/README.md
@@ -1,6 +1,6 @@
## Description
-This is an implementation of [DiffusionDet](https://github.com/ShoufaChen/DiffusionDet) based on [MMDetection](https://github.com/open-mmlab/mmdetection/tree/3.x), [MMCV](https://github.com/open-mmlab/mmcv), and [MMEngine](https://github.com/open-mmlab/mmengine).
+This is an implementation of [DiffusionDet](https://github.com/ShoufaChen/DiffusionDet) based on [MMDetection](https://github.com/open-mmlab/mmdetection/tree/main), [MMCV](https://github.com/open-mmlab/mmcv), and [MMEngine](https://github.com/open-mmlab/mmengine).
@@ -163,10 +163,10 @@ A project does not necessarily have to be finished in a single PR, but it's esse
- [ ] Metafile.yml
-
+
- [ ] Move your modules into the core package following the codebase's file hierarchy structure.
-
+
- [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.
diff --git a/projects/EfficientDet/README.md b/projects/EfficientDet/README.md
index 4de4d9700cb..36f4ed403a3 100644
--- a/projects/EfficientDet/README.md
+++ b/projects/EfficientDet/README.md
@@ -6,7 +6,7 @@
## Abstract
-This is an implementation of [EfficientDet](https://github.com/google/automl) based on [MMDetection](https://github.com/open-mmlab/mmdetection/tree/3.x), [MMCV](https://github.com/open-mmlab/mmcv), and [MMEngine](https://github.com/open-mmlab/mmengine).
+This is an implementation of [EfficientDet](https://github.com/google/automl) based on [MMDetection](https://github.com/open-mmlab/mmdetection/tree/main), [MMCV](https://github.com/open-mmlab/mmcv), and [MMEngine](https://github.com/open-mmlab/mmengine).
EfficientDet is a new family of object detectors that consistently achieves much better efficiency than prior art across a wide
spectrum of resource constraints.
@@ -145,10 +145,10 @@ A project does not necessarily have to be finished in a single PR, but it's esse
- [ ] Metafile.yml
-
+
- [ ] Move your modules into the core package following the codebase's file hierarchy structure.
-
+
- [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.
diff --git a/projects/SparseInst/README.md b/projects/SparseInst/README.md
index 54602f65452..86e1521ab60 100644
--- a/projects/SparseInst/README.md
+++ b/projects/SparseInst/README.md
@@ -14,7 +14,7 @@ Tianheng Cheng, Xinggang Wang
## Description
-This is an implementation of [SparseInst](https://github.com/hustvl/SparseInst) based on [MMDetection](https://github.com/open-mmlab/mmdetection/tree/3.x), [MMCV](https://github.com/open-mmlab/mmcv), and [MMEngine](https://github.com/open-mmlab/mmengine).
+This is an implementation of [SparseInst](https://github.com/hustvl/SparseInst) based on [MMDetection](https://github.com/open-mmlab/mmdetection/tree/main), [MMCV](https://github.com/open-mmlab/mmcv), and [MMEngine](https://github.com/open-mmlab/mmengine).
**SparseInst** is a conceptually novel, efficient, and fully convolutional framework for real-time instance segmentation.
In contrast to region boxes or anchors (centers), SparseInst adopts a sparse set of **instance activation maps** as object representation to highlight informative regions for each foreground object.
@@ -122,10 +122,10 @@ A project does not necessarily have to be finished in a single PR, but it's esse
- [ ] Metafile.yml
-
+
- [ ] Move your modules into the core package following the codebase's file hierarchy structure.
-
+
- [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.
diff --git a/projects/example_project/README.md b/projects/example_project/README.md
index a29a6998d35..67879104b9f 100644
--- a/projects/example_project/README.md
+++ b/projects/example_project/README.md
@@ -1,6 +1,6 @@
# Dummy ResNet Wrapper
-This is an example README for community `projects/`. We have provided detailed explanations for each field in the form of html comments, which are visible when you read the source of this README file. If you wish to submit your project to our main repository, then all the fields in this README are mandatory for others to understand what you have achieved in this implementation. For more details, read our [contribution guide](https://mmdetection.readthedocs.io/en/3.x/notes/contribution_guide.html) or approach us in [Discussions](https://github.com/open-mmlab/mmdetection/discussions).
+This is an example README for community `projects/`. We have provided detailed explanations for each field in the form of HTML comments, which are visible when you read the source of this README file. If you wish to submit your project to our main repository, then all the fields in this README are mandatory for others to understand what you have achieved in this implementation. For more details, read our [contribution guide](https://mmdetection.readthedocs.io/en/latest/notes/contribution_guide.html) or approach us in [Discussions](https://github.com/open-mmlab/mmdetection/discussions).
## Description
@@ -38,7 +38,7 @@ python tools/test.py projects/example_project/configs/faster-rcnn_dummy-resnet_f
## Results
-
| Method | Backbone | Pretrained Model | Training set | Test set | #epoch | box AP | Download |
@@ -107,10 +107,10 @@ A project does not necessarily have to be finished in a single PR, but it's esse
- [ ] Metafile.yml
-
+
- [ ] Move your modules into the core package following the codebase's file hierarchy structure.
-
+
- [ ] Refactor your modules into the core package following the codebase's file hierarchy structure.
diff --git a/requirements/mminstall.txt b/requirements/mminstall.txt
index abe49081a48..4213aa6bbd9 100644
--- a/requirements/mminstall.txt
+++ b/requirements/mminstall.txt
@@ -1,2 +1,2 @@
mmcv>=2.0.0rc4,<2.1.0
-mmengine>=0.6.0,<1.0.0
+mmengine>=0.7.1,<1.0.0
diff --git a/requirements/readthedocs.txt b/requirements/readthedocs.txt
index b3e967ea01c..bf5ee9a4696 100644
--- a/requirements/readthedocs.txt
+++ b/requirements/readthedocs.txt
@@ -1,5 +1,5 @@
-mmcv>=2.0.0rc1,<2.1.0
-mmengine>=0.1.0,<1.0.0
+mmcv>=2.0.0rc4,<2.1.0
+mmengine>=0.7.1,<1.0.0
scipy
torch
torchvision