diff --git a/README.md b/README.md
index e3debf83..5f73b873 100644
--- a/README.md
+++ b/README.md
@@ -6,16 +6,16 @@
**基于飞桨框架开发的高性能遥感图像处理开发套件,端到端地完成从训练到部署的全流程遥感深度学习应用。**
-
+
[![license](https://img.shields.io/badge/license-Apache%202-blue.svg)](LICENSE)
- [![build status](https://github.com/PaddleCV-SIG/PaddleRS/workflows/build_and_test.yaml/badge.svg?branch=develop)](https://github.com/PaddleCV-SIG/PaddleRS/actions)
+ [![build status](https://github.com/PaddlePaddle/PaddleRS/actions/workflows/build_and_test.yaml/badge.svg?branch=develop)](https://github.com/PaddlePaddle/PaddleRS/actions)
![python version](https://img.shields.io/badge/python-3.7+-orange.svg)
![support os](https://img.shields.io/badge/os-linux%2C%20win%2C%20mac-yellow.svg)
## 最新动态
-* [2022-05-19] 🔥 PaddleRS发布1.0-beta版本,全面支持遥感领域深度学习任务。详细发版信息请参考[Release Note](https://github.com/PaddleCV-SIG/PaddleRS/releases)。
+* [2022-05-19] 🔥 PaddleRS发布1.0-beta版本,全面支持遥感领域深度学习任务。详细发版信息请参考[Release Note](https://github.com/PaddlePaddle/PaddleRS/releases)。
## 简介
@@ -173,7 +173,7 @@ PaddleRS是遥感科研院所、相关高校共同基于飞桨开发的遥感处
## 技术交流
-* 如果你发现任何PaddleRS存在的问题或者是建议, 欢迎通过[GitHub Issues](https://github.com/PaddleCV-SIG/PaddleRS/issues)给我们提issues。
+* 如果你发现任何PaddleRS存在的问题或者是建议, 欢迎通过[GitHub Issues](https://github.com/PaddlePaddle/PaddleRS/issues)给我们提issues。
* 欢迎加入PaddleRS 微信群
@@ -199,7 +199,7 @@ PaddleRS是遥感科研院所、相关高校共同基于飞桨开发的遥感处
* [变化检测示例](./docs/cases/csc_cd_cn.md)
* [超分模块示例](./docs/cases/sr_seg_cn.md)
* 代码贡献
- * [PaddleRS代码注释规范](https://github.com/PaddleCV-SIG/PaddleRS/wiki/PaddleRS代码注释规范)
+ * [PaddleRS代码注释规范](https://github.com/PaddlePaddle/PaddleRS/wiki/PaddleRS代码注释规范)
## 开源贡献
@@ -219,7 +219,7 @@ PaddleRS是遥感科研院所、相关高校共同基于飞桨开发的遥感处
@misc{paddlers2022,
title={PaddleRS, Awesome Remote Sensing Toolkit based on PaddlePaddle},
author={PaddlePaddle Authors},
- howpublished = {\url{https://github.com/PaddleCV-SIG/PaddleRS}},
+ howpublished = {\url{https://github.com/PaddlePaddle/PaddleRS}},
year={2022}
}
```
diff --git a/docs/apis/transforms.md b/docs/apis/transforms.md
index 9721ec7d..f43b3e06 100644
--- a/docs/apis/transforms.md
+++ b/docs/apis/transforms.md
@@ -36,10 +36,12 @@ from paddlers.datasets import CDDataset
train_transforms = T.Compose([
+ T.DecodeImg(),
T.Resize(target_size=512),
T.RandomHorizontalFlip(),
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
train_dataset = CDDataset(
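With this change, decoding (`DecodeImg`) and output arrangement (`ArrangeChangeDetector`) become explicit steps in the user-built pipeline instead of being injected later by `arrange_transforms`. A minimal sketch of how such a two-phase pipeline can work — the class and operator names below are illustrative stand-ins, not the PaddleRS API:

```python
class SimpleCompose:
    """Illustrative two-phase pipeline: ordinary transforms, then an arrange step."""

    def __init__(self, transforms):
        # By convention here, the trailing callable is the arrange step that
        # produces the final model inputs.
        self.transforms = transforms[:-1]
        self.arrange = transforms[-1]

    def apply_transforms(self, sample):
        for op in self.transforms:
            sample = op(sample)
        return sample

    def arrange_outputs(self, sample):
        return self.arrange(sample)

    def __call__(self, sample):
        return self.arrange_outputs(self.apply_transforms(sample))


# Toy operators standing in for DecodeImg / Normalize / ArrangeChangeDetector.
decode = lambda s: {**s, 'image': [1, 2, 3]}
normalize = lambda s: {**s, 'image': [v / 3 for v in s['image']]}
arrange_train = lambda s: (s['image'],)

pipe = SimpleCompose([decode, normalize, arrange_train])
out = pipe({'image_path': 'a.png'})
```

Splitting `apply_transforms` from `arrange_outputs` is what lets `CDDataset.__getitem__` (below in this diff) post-process the sample between the two phases.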
diff --git a/docs/cases/csc_cd_cn.md b/docs/cases/csc_cd_cn.md
index 3974c55e..169fb10d 100644
--- a/docs/cases/csc_cd_cn.md
+++ b/docs/cases/csc_cd_cn.md
@@ -79,7 +79,7 @@ print("数据集划分已完成。")
## 模型训练与推理
-本项目使用[PaddleRS](https://github.com/PaddleCV-SIG/PaddleRS)套件搭建模型训练与推理框架。PaddleRS是基于飞桨开发的遥感处理平台,支持遥感图像分类、目标检测、图像分割、以及变化检测等常用遥感任务,能够帮助开发者更便捷地完成从训练到部署全流程遥感深度学习应用。在变化检测方面,PaddleRS目前支持9个state-of-the-art(SOTA)模型,且复杂的训练和推理过程被封装到数个API中,能够提供开箱即用的用户体验。
+本项目使用[PaddleRS](https://github.com/PaddlePaddle/PaddleRS)套件搭建模型训练与推理框架。PaddleRS是基于飞桨开发的遥感处理平台,支持遥感图像分类、目标检测、图像分割、以及变化检测等常用遥感任务,能够帮助开发者更便捷地完成从训练到部署全流程遥感深度学习应用。在变化检测方面,PaddleRS目前支持9个state-of-the-art(SOTA)模型,且复杂的训练和推理过程被封装到数个API中,能够提供开箱即用的用户体验。
```python
# 安装第三方库
@@ -365,7 +365,7 @@ class InferDataset(paddle.io.Dataset):
names = []
for line in lines:
items = line.strip().split(' ')
- items = list(map(pdrs.utils.path_normalization, items))
+ items = list(map(pdrs.utils.norm_path, items))
item_dict = {
'image_t1': osp.join(data_dir, items[0]),
'image_t2': osp.join(data_dir, items[1])
@@ -588,5 +588,5 @@ Image.frombytes('RGB', fig.canvas.get_width_height(), fig.canvas.tostring_rgb())
## 参考资料
-- [遥感数据介绍](https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/data/rs_data_cn.md)
-- [PaddleRS文档](https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/tutorials/train/README.md)
+- [遥感数据介绍](https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/data/rs_data_cn.md)
+- [PaddleRS文档](https://github.com/PaddlePaddle/PaddleRS/blob/develop/tutorials/train/README.md)
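The `path_normalization` → `norm_path` rename recurs throughout this diff. A hedged sketch of what such a helper presumably does — canonicalize mixed path separators — written here as an assumption, not the actual `paddlers.utils.norm_path` implementation:

```python
import os


def norm_path(path):
    # Collapse mixed '/' and '\' separators to the platform separator.
    # Assumed behavior; the real paddlers.utils.norm_path may differ.
    return path.replace('/', os.sep).replace('\\', os.sep)


normalized = norm_path('imgs\\t1/a.png')
```

This kind of helper matters for file lists authored on Windows (backslash paths) and consumed on Linux, which is exactly where the dataset loaders below apply it.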
diff --git a/docs/cases/sr_seg_cn.md b/docs/cases/sr_seg_cn.md
index 313f145e..9247df69 100644
--- a/docs/cases/sr_seg_cn.md
+++ b/docs/cases/sr_seg_cn.md
@@ -66,7 +66,7 @@ plt.show()
```python
# 从github上克隆仓库
-!git clone https://github.com/PaddleCV-SIG/PaddleRS.git
+!git clone https://github.com/PaddlePaddle/PaddleRS.git
```
```python
@@ -221,4 +221,4 @@ for filename in img_list:
## 五、总结
- 本项目调用PaddleRS提供的超分重建接口,选用DRN模型对真实采集的低分辨率影像进行重建,再对重建后的图像进行分割,从结果上看,**超分重建后的图片的分割结果更好**
- **不足之处**:虽然相对于低分辨率影像,超分重建后的预测精度从目视的角度有所提高,但是并没有达到UDD6测试集中的效果,所以**模型的泛化能力也需要提高才行,光靠超分重建依然不够**
-- **后续工作**:将会把超分重建这一步整合到PaddleRS中的transform模块,在high-level任务预测之前可以进行调用改善图像质量,请大家多多关注[PaddleRS](https://github.com/PaddleCV-SIG/PaddleRS)
+- **后续工作**:将会把超分重建这一步整合到PaddleRS中的transform模块,在high-level任务预测之前可以进行调用改善图像质量,请大家多多关注[PaddleRS](https://github.com/PaddlePaddle/PaddleRS)
diff --git a/docs/data/tools.md b/docs/data/tools.md
index bdbde7d1..8d4793fd 100644
--- a/docs/data/tools.md
+++ b/docs/data/tools.md
@@ -17,7 +17,7 @@
首先需要`clone`此repo并进入到`tools`的文件夹中:
```shell
-git clone https://github.com/PaddleCV-SIG/PaddleRS.git
+git clone https://github.com/PaddlePaddle/PaddleRS.git
cd PaddleRS\tools
```
diff --git a/docs/quick_start.md b/docs/quick_start.md
index ff9556b8..3be27bae 100644
--- a/docs/quick_start.md
+++ b/docs/quick_start.md
@@ -10,7 +10,7 @@
PaddleRS代码会跟随开发进度不断更新,可以安装develop分支的代码使用最新的功能,安装方式如下:
```
-git clone https://github.com/PaddleCV-SIG/PaddleRS
+git clone https://github.com/PaddlePaddle/PaddleRS
cd PaddleRS
git checkout develop
pip install -r requirements.txt
diff --git a/paddlers/datasets/base.py b/paddlers/datasets/base.py
new file mode 100644
index 00000000..125ed01b
--- /dev/null
+++ b/paddlers/datasets/base.py
@@ -0,0 +1,35 @@
+# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from copy import deepcopy
+
+from paddle.io import Dataset
+
+from paddlers.utils import get_num_workers
+
+
+class BaseDataset(Dataset):
+ def __init__(self, data_dir, label_list, transforms, num_workers, shuffle):
+ super(BaseDataset, self).__init__()
+
+ self.data_dir = data_dir
+ self.label_list = label_list
+ self.transforms = deepcopy(transforms)
+ self.num_workers = get_num_workers(num_workers)
+ self.shuffle = shuffle
+
+ def __getitem__(self, idx):
+ sample = deepcopy(self.file_list[idx])
+ outputs = self.transforms(sample)
+ return outputs
\ No newline at end of file
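The new `BaseDataset` factors the state shared by every dataset class (`data_dir`, deep-copied `transforms`, worker count, shuffle flag) plus a default `__getitem__`. The pattern can be sketched without the paddle dependency — `paddle.io.Dataset` and `get_num_workers` are replaced with plain-Python stand-ins here:

```python
from copy import deepcopy


class MiniBaseDataset:
    """Sketch of the BaseDataset pattern: shared state plus a default __getitem__."""

    def __init__(self, data_dir, label_list, transforms, num_workers, shuffle):
        self.data_dir = data_dir
        self.label_list = label_list
        # Deep-copy so per-dataset pipeline tweaks do not leak across datasets.
        self.transforms = deepcopy(transforms)
        self.num_workers = num_workers
        self.shuffle = shuffle

    def __getitem__(self, idx):
        # Subclasses populate self.file_list; copy so transforms may mutate freely.
        sample = deepcopy(self.file_list[idx])
        return self.transforms(sample)

    def __len__(self):
        return len(self.file_list)


class TinyDataset(MiniBaseDataset):
    def __init__(self, records, transforms):
        super().__init__('.', None, transforms, 0, False)
        self.file_list = records


ds = TinyDataset([{'x': 1}, {'x': 2}], lambda s: s['x'] * 10)
```

Subclasses that need extra per-sample logic (e.g. `CDDataset` below) override `__getitem__`; the others (`ClasDataset`, `SegDataset`) delete their copies and inherit this one.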
diff --git a/paddlers/datasets/cd_dataset.py b/paddlers/datasets/cd_dataset.py
index c0576ad0..1250dab1 100644
--- a/paddlers/datasets/cd_dataset.py
+++ b/paddlers/datasets/cd_dataset.py
@@ -16,12 +16,11 @@
from enum import IntEnum
import os.path as osp
-from paddle.io import Dataset
+from .base import BaseDataset
+from paddlers.utils import logging, get_encoding, norm_path, is_pic
-from paddlers.utils import logging, get_num_workers, get_encoding, path_normalization, is_pic
-
-class CDDataset(Dataset):
+class CDDataset(BaseDataset):
"""
读取变化检测任务数据集,并对样本进行相应的处理(来自SegDataset,图像标签需要两个)。
@@ -31,8 +30,10 @@ class CDDataset(Dataset):
False(默认设置)时,文件中每一行应依次包含第一时相影像、第二时相影像以及变化检测标签的路径;当`with_seg_labels`为True时,
文件中每一行应依次包含第一时相影像、第二时相影像、变化检测标签、第一时相建筑物标签以及第二时相建筑物标签的路径。
label_list (str): 描述数据集包含的类别信息文件路径。默认值为None。
- transforms (paddlers.transforms): 数据集中每个样本的预处理/增强算子。
- num_workers (int|str): 数据集中样本在预处理过程中的线程或进程数。默认为'auto'。
+ transforms (paddlers.transforms.Compose): 数据集中每个样本的预处理/增强算子。
+ num_workers (int|str): 数据集中样本在预处理过程中的线程或进程数。默认为'auto'。当设为'auto'时,根据
+ 系统的实际CPU核数设置`num_workers`: 如果CPU核数的一半大于8,则`num_workers`为8,否则为CPU核数的
+ 一半。
shuffle (bool): 是否需要对数据集中样本打乱顺序。默认为False。
with_seg_labels (bool, optional): 数据集中是否包含两个时相的语义分割标签。默认为False。
binarize_labels (bool, optional): 是否对数据集中的标签进行二值化操作。默认为False。
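The expanded docstring spells out the `'auto'` policy for `num_workers`: half the CPU cores, capped at 8. A sketch of that policy as documented (the real `paddlers.utils.get_num_workers` may differ in detail):

```python
import os


def get_num_workers(num_workers):
    # Mirrors the documented 'auto' policy: half the CPU cores, capped at 8.
    if num_workers == 'auto':
        half = (os.cpu_count() or 1) // 2
        return 8 if half > 8 else half
    return num_workers
```

Explicit integers pass through unchanged, so users can still pin the worker count.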
@@ -47,15 +48,13 @@ def __init__(self,
shuffle=False,
with_seg_labels=False,
binarize_labels=False):
- super(CDDataset, self).__init__()
+ super(CDDataset, self).__init__(data_dir, label_list, transforms,
+ num_workers, shuffle)
DELIMETER = ' '
- self.transforms = copy.deepcopy(transforms)
# TODO: batch padding
self.batch_transforms = None
- self.num_workers = get_num_workers(num_workers)
- self.shuffle = shuffle
self.file_list = list()
self.labels = list()
self.with_seg_labels = with_seg_labels
@@ -82,7 +81,7 @@ def __init__(self,
"Line[{}] in file_list[{}] has an incorrect number of file paths.".
format(line.strip(), file_list))
- items = list(map(path_normalization, items))
+ items = list(map(norm_path, items))
full_path_im_t1 = osp.join(data_dir, items[0])
full_path_im_t2 = osp.join(data_dir, items[1])
@@ -128,9 +127,17 @@ def __init__(self,
def __getitem__(self, idx):
sample = copy.deepcopy(self.file_list[idx])
- outputs = self.transforms(sample)
+ sample = self.transforms.apply_transforms(sample)
+
if self.binarize_labels:
- outputs = outputs[:2] + tuple(map(self._binarize, outputs[2:]))
+ # Requires 'mask' to exist
+ sample['mask'] = self._binarize(sample['mask'])
+ if 'aux_masks' in sample:
+ sample['aux_masks'] = list(
+ map(self._binarize, sample['aux_masks']))
+
+ outputs = self.transforms.arrange_outputs(sample)
+
return outputs
def __len__(self):
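The rewritten `CDDataset.__getitem__` binarizes `sample['mask']` (and any `aux_masks`) between the transform and arrange phases. A sketch of what `_binarize` plausibly does — collapsing a 0/255 grayscale change mask to {0, 1} labels; the threshold and dtype are assumptions, not the confirmed implementation:

```python
import numpy as np


def binarize(mask, threshold=127):
    # Assumed behavior of CDDataset._binarize: map a grayscale change mask
    # (e.g. 0/255) to {0, 1} integer labels.
    return (mask > threshold).astype('int64')


sample = {
    'mask': np.array([[0, 255], [255, 0]], dtype='uint8'),
    'aux_masks': [np.array([0, 255], dtype='uint8')],
}
sample['mask'] = binarize(sample['mask'])
sample['aux_masks'] = list(map(binarize, sample['aux_masks']))
```

Operating on the named `sample` dict, rather than on positional tuple slices as before, is what makes the "Requires 'mask' to exist" precondition checkable.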
diff --git a/paddlers/datasets/clas_dataset.py b/paddlers/datasets/clas_dataset.py
index 172513dd..5e1a8112 100644
--- a/paddlers/datasets/clas_dataset.py
+++ b/paddlers/datasets/clas_dataset.py
@@ -13,22 +13,22 @@
# limitations under the License.
import os.path as osp
-import copy
-from paddle.io import Dataset
+from .base import BaseDataset
+from paddlers.utils import logging, get_encoding, norm_path, is_pic
-from paddlers.utils import logging, get_num_workers, get_encoding, path_normalization, is_pic
-
-class ClasDataset(Dataset):
+class ClasDataset(BaseDataset):
"""读取图像分类任务数据集,并对样本进行相应的处理。
Args:
data_dir (str): 数据集所在的目录路径。
file_list (str): 描述数据集图片文件和对应标注序号(文本内每行路径为相对data_dir的相对路)。
label_list (str): 描述数据集包含的类别信息文件路径,文件格式为(类别 说明)。默认值为None。
- transforms (paddlers.transforms): 数据集中每个样本的预处理/增强算子。
- num_workers (int|str): 数据集中样本在预处理过程中的线程或进程数。默认为'auto'。
+ transforms (paddlers.transforms.Compose): 数据集中每个样本的预处理/增强算子。
+ num_workers (int|str): 数据集中样本在预处理过程中的线程或进程数。默认为'auto'。当设为'auto'时,根据
+ 系统的实际CPU核数设置`num_workers`: 如果CPU核数的一半大于8,则`num_workers`为8,否则为CPU核数的
+ 一半。
shuffle (bool): 是否需要对数据集中样本打乱顺序。默认为False。
"""
@@ -39,14 +39,11 @@ def __init__(self,
transforms=None,
num_workers='auto',
shuffle=False):
- super(ClasDataset, self).__init__()
- self.transforms = copy.deepcopy(transforms)
+ super(ClasDataset, self).__init__(data_dir, label_list, transforms,
+ num_workers, shuffle)
# TODO batch padding
self.batch_transforms = None
- self.num_workers = get_num_workers(num_workers)
- self.shuffle = shuffle
self.file_list = list()
- self.label_list = label_list
self.labels = list()
# TODO:非None时,让用户跳转数据集分析生成label_list
@@ -64,7 +61,7 @@ def __init__(self,
"A space is defined as the delimiter to separate the image and label path, " \
"so the space cannot be in the image or label path, but the line[{}] of " \
" file_list[{}] has a space in the image or label path.".format(line, file_list))
- items[0] = path_normalization(items[0])
+ items[0] = norm_path(items[0])
full_path_im = osp.join(data_dir, items[0])
label = items[1]
if not is_pic(full_path_im):
@@ -84,10 +81,5 @@ def __init__(self,
logging.info("{} samples in file {}".format(
len(self.file_list), file_list))
- def __getitem__(self, idx):
- sample = copy.deepcopy(self.file_list[idx])
- outputs = self.transforms(sample)
- return outputs
-
def __len__(self):
return len(self.file_list)
diff --git a/paddlers/datasets/coco.py b/paddlers/datasets/coco.py
index b4fc845f..bfd59fc9 100644
--- a/paddlers/datasets/coco.py
+++ b/paddlers/datasets/coco.py
@@ -20,14 +20,14 @@
from collections import OrderedDict
import numpy as np
-from paddle.io import Dataset
-from paddlers.utils import logging, get_num_workers, get_encoding, path_normalization, is_pic
+from .base import BaseDataset
+from paddlers.utils import logging, get_encoding, norm_path, is_pic
from paddlers.transforms import DecodeImg, MixupImage
from paddlers.tools import YOLOAnchorCluster
-class COCODetection(Dataset):
+class COCODetection(BaseDataset):
"""读取COCO格式的检测数据集,并对样本进行相应的处理。
Args:
@@ -35,7 +35,7 @@ class COCODetection(Dataset):
image_dir (str): 描述数据集图片文件路径。
anno_path (str): COCO标注文件路径。
label_list (str): 描述数据集包含的类别信息文件路径。
- transforms (paddlers.det.transforms): 数据集中每个样本的预处理/增强算子。
+ transforms (paddlers.transforms.Compose): 数据集中每个样本的预处理/增强算子。
num_workers (int|str): 数据集中样本在预处理过程中的线程或进程数。默认为'auto'。当设为'auto'时,根据
系统的实际CPU核数设置`num_workers`: 如果CPU核数的一半大于8,则`num_workers`为8,否则为CPU核数的
一半。
@@ -60,10 +60,10 @@ def __init__(self,
import matplotlib
matplotlib.use('Agg')
from pycocotools.coco import COCO
- super(COCODetection, self).__init__()
- self.data_dir = data_dir
+ super(COCODetection, self).__init__(data_dir, label_list, transforms,
+ num_workers, shuffle)
+
self.data_fields = None
- self.transforms = copy.deepcopy(transforms)
self.num_max_boxes = 50
self.use_mix = False
@@ -76,8 +76,6 @@ def __init__(self,
break
self.batch_transforms = None
- self.num_workers = get_num_workers(num_workers)
- self.shuffle = shuffle
self.allow_empty = allow_empty
self.empty_ratio = empty_ratio
self.file_list = list()
@@ -104,8 +102,8 @@ def __init__(self,
'name': k
})
- anno_path = path_normalization(os.path.join(self.data_dir, anno_path))
- image_dir = path_normalization(os.path.join(self.data_dir, image_dir))
+ anno_path = norm_path(os.path.join(self.data_dir, anno_path))
+ image_dir = norm_path(os.path.join(self.data_dir, image_dir))
assert anno_path.endswith('.json'), \
'invalid coco annotation file: ' + anno_path
diff --git a/paddlers/datasets/seg_dataset.py b/paddlers/datasets/seg_dataset.py
index 6bdbcf53..4d1f287d 100644
--- a/paddlers/datasets/seg_dataset.py
+++ b/paddlers/datasets/seg_dataset.py
@@ -15,20 +15,21 @@
import os.path as osp
import copy
-from paddle.io import Dataset
+from .base import BaseDataset
+from paddlers.utils import logging, get_encoding, norm_path, is_pic
-from paddlers.utils import logging, get_num_workers, get_encoding, path_normalization, is_pic
-
-class SegDataset(Dataset):
+class SegDataset(BaseDataset):
"""读取语义分割任务数据集,并对样本进行相应的处理。
Args:
data_dir (str): 数据集所在的目录路径。
file_list (str): 描述数据集图片文件和对应标注文件的文件路径(文本内每行路径为相对data_dir的相对路)。
label_list (str): 描述数据集包含的类别信息文件路径。默认值为None。
- transforms (paddlers.transforms): 数据集中每个样本的预处理/增强算子。
- num_workers (int|str): 数据集中样本在预处理过程中的线程或进程数。默认为'auto'。
+ transforms (paddlers.transforms.Compose): 数据集中每个样本的预处理/增强算子。
+ num_workers (int|str): 数据集中样本在预处理过程中的线程或进程数。默认为'auto'。当设为'auto'时,根据
+ 系统的实际CPU核数设置`num_workers`: 如果CPU核数的一半大于8,则`num_workers`为8,否则为CPU核数的
+ 一半。
shuffle (bool): 是否需要对数据集中样本打乱顺序。默认为False。
"""
@@ -39,12 +40,10 @@ def __init__(self,
transforms=None,
num_workers='auto',
shuffle=False):
- super(SegDataset, self).__init__()
- self.transforms = copy.deepcopy(transforms)
+ super(SegDataset, self).__init__(data_dir, label_list, transforms,
+ num_workers, shuffle)
# TODO batch padding
self.batch_transforms = None
- self.num_workers = get_num_workers(num_workers)
- self.shuffle = shuffle
self.file_list = list()
self.labels = list()
@@ -63,8 +62,8 @@ def __init__(self,
"A space is defined as the delimiter to separate the image and label path, " \
"so the space cannot be in the image or label path, but the line[{}] of " \
" file_list[{}] has a space in the image or label path.".format(line, file_list))
- items[0] = path_normalization(items[0])
- items[1] = path_normalization(items[1])
+ items[0] = norm_path(items[0])
+ items[1] = norm_path(items[1])
full_path_im = osp.join(data_dir, items[0])
full_path_label = osp.join(data_dir, items[1])
if not is_pic(full_path_im) or not is_pic(full_path_label):
@@ -83,10 +82,5 @@ def __init__(self,
logging.info("{} samples in file {}".format(
len(self.file_list), file_list))
- def __getitem__(self, idx):
- sample = copy.deepcopy(self.file_list[idx])
- outputs = self.transforms(sample)
- return outputs
-
def __len__(self):
return len(self.file_list)
diff --git a/paddlers/datasets/voc.py b/paddlers/datasets/voc.py
index 723e5d1a..bdc9b9c0 100644
--- a/paddlers/datasets/voc.py
+++ b/paddlers/datasets/voc.py
@@ -22,21 +22,21 @@
import xml.etree.ElementTree as ET
import numpy as np
-from paddle.io import Dataset
-from paddlers.utils import logging, get_num_workers, get_encoding, path_normalization, is_pic
+from .base import BaseDataset
+from paddlers.utils import logging, get_encoding, norm_path, is_pic
from paddlers.transforms import DecodeImg, MixupImage
from paddlers.tools import YOLOAnchorCluster
-class VOCDetection(Dataset):
+class VOCDetection(BaseDataset):
"""读取PascalVOC格式的检测数据集,并对样本进行相应的处理。
Args:
data_dir (str): 数据集所在的目录路径。
file_list (str): 描述数据集图片文件和对应标注文件的文件路径(文本内每行路径为相对data_dir的相对路)。
label_list (str): 描述数据集包含的类别信息文件路径。
- transforms (paddlers.det.transforms): 数据集中每个样本的预处理/增强算子。
+ transforms (paddlers.transforms.Compose): 数据集中每个样本的预处理/增强算子。
num_workers (int|str): 数据集中样本在预处理过程中的线程或进程数。默认为'auto'。当设为'auto'时,根据
系统的实际CPU核数设置`num_workers`: 如果CPU核数的一半大于8,则`num_workers`为8,否则为CPU核数的
一半。
@@ -60,10 +60,10 @@ def __init__(self,
import matplotlib
matplotlib.use('Agg')
from pycocotools.coco import COCO
- super(VOCDetection, self).__init__()
- self.data_dir = data_dir
+ super(VOCDetection, self).__init__(data_dir, label_list, transforms,
+ num_workers, shuffle)
+
self.data_fields = None
- self.transforms = copy.deepcopy(transforms)
self.num_max_boxes = 50
self.use_mix = False
@@ -76,8 +76,6 @@ def __init__(self,
break
self.batch_transforms = None
- self.num_workers = get_num_workers(num_workers)
- self.shuffle = shuffle
self.allow_empty = allow_empty
self.empty_ratio = empty_ratio
self.file_list = list()
@@ -117,8 +115,8 @@ def __init__(self,
img_file, xml_file = [
osp.join(data_dir, x) for x in line.strip().split()[:2]
]
- img_file = path_normalization(img_file)
- xml_file = path_normalization(xml_file)
+ img_file = norm_path(img_file)
+ xml_file = norm_path(xml_file)
if not is_pic(img_file):
continue
if not osp.isfile(xml_file):
diff --git a/paddlers/deploy/predictor.py b/paddlers/deploy/predictor.py
index 88970603..90f7581c 100644
--- a/paddlers/deploy/predictor.py
+++ b/paddlers/deploy/predictor.py
@@ -258,9 +258,9 @@ def predict(self,
Args:
img_file(list[str | tuple | np.ndarray] | str | tuple | np.ndarray): For scene classification, image restoration,
object detection and semantic segmentation tasks, `img_file` should be either the path of the image to predict
- , a decoded image (a `np.ndarray`, which should be consistent with what you get from passing image path to
- `paddlers.transforms.decode_image()`), or a list of image paths or decoded images. For change detection tasks,
- `img_file` should be a tuple of image paths, a tuple of decoded images, or a list of tuples.
+ , a decoded image (a np.ndarray, which should be consistent with what you get from passing image path to
+ paddlers.transforms.decode_image()), or a list of image paths or decoded images. For change detection tasks,
+ img_file should be a tuple of image paths, a tuple of decoded images, or a list of tuples.
topk(int, optional): Top-k values to reserve in a classification result. Defaults to 1.
transforms (paddlers.transforms.Compose | None, optional): Pipeline of data preprocessing. If None, load transforms
from `model.yml`. Defaults to None.
diff --git a/paddlers/tasks/base.py b/paddlers/tasks/base.py
index 8c67ea85..0e2e9012 100644
--- a/paddlers/tasks/base.py
+++ b/paddlers/tasks/base.py
@@ -30,7 +30,6 @@
import paddlers
import paddlers.utils.logging as logging
-from paddlers.transforms import arrange_transforms
from paddlers.utils import (seconds_to_hms, get_single_card_bs, dict2str,
get_pretrain_weights, load_pretrain_weights,
load_checkpoint, SmoothedValue, TrainingStats,
@@ -302,10 +301,7 @@ def train_loop(self,
early_stop=False,
early_stop_patience=5,
use_vdl=True):
- arrange_transforms(
- model_type=self.model_type,
- transforms=train_dataset.transforms,
- mode='train')
+ self._check_transforms(train_dataset.transforms, 'train')
if "RCNN" in self.__class__.__name__ and train_dataset.pos_num < len(
train_dataset.file_list):
@@ -488,10 +484,7 @@ def analyze_sensitivity(self,
assert criterion in {'l1_norm', 'fpgm'}, \
"Pruning criterion {} is not supported. Please choose from ['l1_norm', 'fpgm']"
- arrange_transforms(
- model_type=self.model_type,
- transforms=dataset.transforms,
- mode='eval')
+ self._check_transforms(dataset.transforms, 'eval')
if self.model_type == 'detector':
self.net.eval()
else:
@@ -670,3 +663,15 @@ def _export_inference_model(self, save_dir, image_shape=None):
open(osp.join(save_dir, '.success'), 'w').close()
logging.info("The model for the inference deployment is saved in {}.".
format(save_dir))
+
+ def _check_transforms(self, transforms, mode):
+ # NOTE: Check transforms and transforms.arrange and give user-friendly error messages.
+ if not isinstance(transforms, paddlers.transforms.Compose):
+ raise TypeError("`transforms` must be paddlers.transforms.Compose.")
+ arrange_obj = transforms.arrange
+ if not isinstance(arrange_obj, paddlers.transforms.operators.Arrange):
+ raise TypeError("`transforms.arrange` must be an Arrange object.")
+ if arrange_obj.mode != mode:
+ raise ValueError(
+ f"Incorrect arrange mode! Expected {mode} but got {arrange_obj.mode}."
+ )
diff --git a/paddlers/tasks/change_detector.py b/paddlers/tasks/change_detector.py
index 8af67ab8..6480fe7a 100644
--- a/paddlers/tasks/change_detector.py
+++ b/paddlers/tasks/change_detector.py
@@ -28,7 +28,6 @@
import paddlers.rs_models.cd as cmcd
import paddlers.utils.logging as logging
import paddlers.models.ppseg as paddleseg
-from paddlers.transforms import arrange_transforms
from paddlers.transforms import Resize, decode_image
from paddlers.utils import get_single_card_bs, DisablePrint
from paddlers.utils.checkpoint import seg_pretrain_weights_dict
@@ -137,6 +136,11 @@ def run(self, net, inputs, mode):
else:
pred = paddle.argmax(logit, axis=1, keepdim=True, dtype='int32')
label = inputs[2]
+ if label.ndim == 3:
+ paddle.unsqueeze_(label, axis=1)
+ if label.ndim != 4:
+ raise ValueError("Expected label.ndim == 4 but got {}".format(
+ label.ndim))
origin_shape = [label.shape[-2:]]
pred = self._postprocess(
pred, origin_shape, transforms=inputs[3])[0] # NCHW
@@ -396,10 +400,7 @@ def evaluate(self, eval_dataset, batch_size=1, return_details=False):
"category_F1-score": `F1 score`}.
"""
- arrange_transforms(
- model_type=self.model_type,
- transforms=eval_dataset.transforms,
- mode='eval')
+ self._check_transforms(eval_dataset.transforms, 'eval')
self.net.eval()
nranks = paddle.distributed.get_world_size()
@@ -641,8 +642,7 @@ def slider_predict(self,
print("GeoTiff saved in {}.".format(save_file))
def _preprocess(self, images, transforms, to_tensor=True):
- arrange_transforms(
- model_type=self.model_type, transforms=transforms, mode='test')
+ self._check_transforms(transforms, 'test')
batch_im1, batch_im2 = list(), list()
batch_ori_shape = list()
for im1, im2 in images:
@@ -786,6 +786,13 @@ def _infer_postprocess(self, batch_label_map, batch_score_map,
score_maps.append(score_map.squeeze())
return label_maps, score_maps
+ def _check_transforms(self, transforms, mode):
+ super()._check_transforms(transforms, mode)
+ if not isinstance(transforms.arrange,
+ paddlers.transforms.ArrangeChangeDetector):
+ raise TypeError(
+ "`transforms.arrange` must be an ArrangeChangeDetector object.")
+
class CDNet(BaseChangeDetector):
def __init__(self,
diff --git a/paddlers/tasks/classifier.py b/paddlers/tasks/classifier.py
index 86ec9653..4be1bb6c 100644
--- a/paddlers/tasks/classifier.py
+++ b/paddlers/tasks/classifier.py
@@ -25,7 +25,6 @@
import paddlers.models.ppcls as paddleclas
import paddlers.rs_models.clas as cmcls
import paddlers
-from paddlers.transforms import arrange_transforms
from paddlers.utils import get_single_card_bs, DisablePrint
import paddlers.utils.logging as logging
from .base import BaseModel
@@ -358,10 +357,7 @@ def evaluate(self, eval_dataset, batch_size=1, return_details=False):
"top5": `acc of top5`}.
"""
- arrange_transforms(
- model_type=self.model_type,
- transforms=eval_dataset.transforms,
- mode='eval')
+ self._check_transforms(eval_dataset.transforms, 'eval')
self.net.eval()
nranks = paddle.distributed.get_world_size()
@@ -460,8 +456,7 @@ def predict(self, img_file, transforms=None):
return prediction
def _preprocess(self, images, transforms, to_tensor=True):
- arrange_transforms(
- model_type=self.model_type, transforms=transforms, mode='test')
+ self._check_transforms(transforms, 'test')
batch_im = list()
batch_ori_shape = list()
for im in images:
@@ -527,6 +522,13 @@ def get_transforms_shape_info(batch_ori_shape, transforms):
batch_restore_list.append(restore_list)
return batch_restore_list
+ def _check_transforms(self, transforms, mode):
+ super()._check_transforms(transforms, mode)
+ if not isinstance(transforms.arrange,
+ paddlers.transforms.ArrangeClassifier):
+ raise TypeError(
+ "`transforms.arrange` must be an ArrangeClassifier object.")
+
class ResNet50_vd(BaseClassifier):
def __init__(self, num_classes=2, use_mixed_loss=False, **params):
diff --git a/paddlers/tasks/object_detector.py b/paddlers/tasks/object_detector.py
index 38950c93..7d3bda1e 100644
--- a/paddlers/tasks/object_detector.py
+++ b/paddlers/tasks/object_detector.py
@@ -31,7 +31,6 @@
from paddlers.transforms.operators import _NormalizeBox, _PadBox, _BboxXYXY2XYWH, Resize, Pad
from paddlers.transforms.batch_operators import BatchCompose, BatchRandomResize, BatchRandomResizeByShort, \
_BatchPad, _Gt2YoloTarget
-from paddlers.transforms import arrange_transforms
from .base import BaseModel
from .utils.det_metrics import VOCMetric, COCOMetric
from paddlers.models.ppdet.optimizer import ModelEMA
@@ -452,10 +451,7 @@ def evaluate(self,
}
eval_dataset.batch_transforms = self._compose_batch_transform(
eval_dataset.transforms, mode='eval')
- arrange_transforms(
- model_type=self.model_type,
- transforms=eval_dataset.transforms,
- mode='eval')
+ self._check_transforms(eval_dataset.transforms, 'eval')
self.net.eval()
nranks = paddle.distributed.get_world_size()
@@ -545,8 +541,7 @@ def predict(self, img_file, transforms=None):
return prediction
def _preprocess(self, images, transforms, to_tensor=True):
- arrange_transforms(
- model_type=self.model_type, transforms=transforms, mode='test')
+ self._check_transforms(transforms, 'test')
batch_samples = list()
for im in images:
if isinstance(im, str):
@@ -630,6 +625,13 @@ def _postprocess(self, batch_pred):
return results
+ def _check_transforms(self, transforms, mode):
+ super()._check_transforms(transforms, mode)
+ if not isinstance(transforms.arrange,
+ paddlers.transforms.ArrangeDetector):
+ raise TypeError(
+ "`transforms.arrange` must be an ArrangeDetector object.")
+
class PicoDet(BaseDetector):
def __init__(self,
diff --git a/paddlers/tasks/segmenter.py b/paddlers/tasks/segmenter.py
index 412ba71e..d09da841 100644
--- a/paddlers/tasks/segmenter.py
+++ b/paddlers/tasks/segmenter.py
@@ -26,7 +26,6 @@
import paddlers.models.ppseg as paddleseg
import paddlers.rs_models.seg as cmseg
import paddlers
-from paddlers.transforms import arrange_transforms
from paddlers.utils import get_single_card_bs, DisablePrint
import paddlers.utils.logging as logging
from .base import BaseModel
@@ -136,6 +135,11 @@ def run(self, net, inputs, mode):
else:
pred = paddle.argmax(logit, axis=1, keepdim=True, dtype='int32')
label = inputs[1]
+ if label.ndim == 3:
+ paddle.unsqueeze_(label, axis=1)
+ if label.ndim != 4:
+ raise ValueError("Expected label.ndim == 4 but got {}".format(
+ label.ndim))
origin_shape = [label.shape[-2:]]
pred = self._postprocess(
pred, origin_shape, transforms=inputs[2])[0] # NCHW
@@ -380,10 +384,7 @@ def evaluate(self, eval_dataset, batch_size=1, return_details=False):
"category_F1-score": `F1 score`}.
"""
- arrange_transforms(
- model_type=self.model_type,
- transforms=eval_dataset.transforms,
- mode='eval')
+ self._check_transforms(eval_dataset.transforms, 'eval')
self.net.eval()
nranks = paddle.distributed.get_world_size()
@@ -606,8 +607,7 @@ def slider_predict(self,
print("GeoTiff saved in {}.".format(save_file))
def _preprocess(self, images, transforms, to_tensor=True):
- arrange_transforms(
- model_type=self.model_type, transforms=transforms, mode='test')
+ self._check_transforms(transforms, 'test')
batch_im = list()
batch_ori_shape = list()
for im in images:
@@ -746,6 +746,13 @@ def _infer_postprocess(self, batch_label_map, batch_score_map,
score_maps.append(score_map.squeeze())
return label_maps, score_maps
+ def _check_transforms(self, transforms, mode):
+ super()._check_transforms(transforms, mode)
+ if not isinstance(transforms.arrange,
+ paddlers.transforms.ArrangeSegmenter):
+ raise TypeError(
+ "`transforms.arrange` must be an ArrangeSegmenter object.")
+
class UNet(BaseSegmenter):
def __init__(self,
diff --git a/paddlers/transforms/__init__.py b/paddlers/transforms/__init__.py
index 9e52157d..64f29769 100644
--- a/paddlers/transforms/__init__.py
+++ b/paddlers/transforms/__init__.py
@@ -29,15 +29,19 @@ def decode_image(im_path,
Decode an image.
Args:
+ im_path (str): Path of the image to decode.
to_rgb (bool, optional): If True, convert input image(s) from BGR format to RGB format. Defaults to True.
to_uint8 (bool, optional): If True, quantize and convert decoded image(s) to uint8 type. Defaults to True.
decode_bgr (bool, optional): If True, automatically interpret a non-geo image (e.g. jpeg images) as a BGR image.
Defaults to True.
decode_sar (bool, optional): If True, automatically interpret a two-channel geo image (e.g. geotiff images) as a
SAR image, set this argument to True. Defaults to True.
+
+ Returns:
+ np.ndarray: Decoded image.
"""
- # Do a presence check. `osp.exists` assumes `im_path` is a path-like object.
+ # Do a presence check. osp.exists() assumes `im_path` is a path-like object.
if not osp.exists(im_path):
raise ValueError(f"{im_path} does not exist!")
decoder = T.DecodeImg(
@@ -51,36 +55,14 @@ def decode_image(im_path,
return sample['image']
-def arrange_transforms(model_type, transforms, mode='train'):
- # 给transforms添加arrange操作
- if model_type == 'segmenter':
- if mode == 'eval':
- transforms.apply_im_only = True
- else:
- transforms.apply_im_only = False
- arrange_transform = ArrangeSegmenter(mode)
- elif model_type == 'change_detector':
- if mode == 'eval':
- transforms.apply_im_only = True
- else:
- transforms.apply_im_only = False
- arrange_transform = ArrangeChangeDetector(mode)
- elif model_type == 'classifier':
- arrange_transform = ArrangeClassifier(mode)
- elif model_type == 'detector':
- arrange_transform = ArrangeDetector(mode)
- else:
- raise Exception("Unrecognized model type: {}".format(model_type))
- transforms.arrange_outputs = arrange_transform
-
-
def build_transforms(transforms_info):
transforms = list()
for op_info in transforms_info:
op_name = list(op_info.keys())[0]
op_attr = op_info[op_name]
if not hasattr(T, op_name):
- raise Exception("There's no transform named '{}'".format(op_name))
+ raise ValueError(
+ "There is no transform operator named '{}'.".format(op_name))
transforms.append(getattr(T, op_name)(**op_attr))
eval_transforms = T.Compose(transforms)
return eval_transforms
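`build_transforms` (with its error message improved above) is a small config-driven factory: each list entry maps an operator name to its kwargs, resolved against the transforms namespace. The same pattern, with a stand-in operator table instead of the real `T` module:

```python
class Resize:
    """Stand-in for a transform operator."""

    def __init__(self, target_size):
        self.target_size = target_size


_OPS = {'Resize': Resize}  # stand-in for the transforms module namespace


def build_transforms(transforms_info):
    ops = []
    for op_info in transforms_info:
        # Each entry is a one-key dict: {operator_name: kwargs}.
        op_name, op_attr = next(iter(op_info.items()))
        if op_name not in _OPS:
            raise ValueError(
                "There is no transform operator named '{}'.".format(op_name))
        ops.append(_OPS[op_name](**op_attr))
    return ops


pipeline = build_transforms([{'Resize': {'target_size': 512}}])
```

Raising `ValueError` for an unknown name (rather than the bare `Exception` removed in this hunk) lets callers catch configuration errors specifically.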
diff --git a/paddlers/transforms/functions.py b/paddlers/transforms/functions.py
index 68c59a67..00d98414 100644
--- a/paddlers/transforms/functions.py
+++ b/paddlers/transforms/functions.py
@@ -21,6 +21,7 @@
from sklearn.linear_model import LinearRegression
from skimage import exposure
from joblib import load
+from PIL import Image
def normalize(im, mean, std, min_value=[0, 0, 0], max_value=[255, 255, 255]):
@@ -623,3 +624,19 @@ def inv_pca(im, joblib_path):
r_im = pca.inverse_transform(n_im)
r_im = np.reshape(r_im, (H, W, -1))
return r_im
+
+
+def decode_seg_mask(mask_path):
+ """
+ Decode a segmentation mask image.
+
+ Args:
+ mask_path (str): Path of the mask image to decode.
+
+ Returns:
+ np.ndarray: Decoded mask image.
+ """
+
+ mask = np.asarray(Image.open(mask_path))
+ mask = mask.astype('int64')
+ return mask
diff --git a/paddlers/transforms/operators.py b/paddlers/transforms/operators.py
index 7783bba3..e14e4661 100644
--- a/paddlers/transforms/operators.py
+++ b/paddlers/transforms/operators.py
@@ -30,39 +30,21 @@
from joblib import load
import paddlers
-from .functions import normalize, horizontal_flip, permute, vertical_flip, center_crop, is_poly, \
- horizontal_flip_poly, horizontal_flip_rle, vertical_flip_poly, vertical_flip_rle, crop_poly, \
- crop_rle, expand_poly, expand_rle, resize_poly, resize_rle, dehaze, select_bands, \
- to_intensity, to_uint8, img_flip, img_simple_rotate
+from .functions import (
+ normalize, horizontal_flip, permute, vertical_flip, center_crop, is_poly,
+ horizontal_flip_poly, horizontal_flip_rle, vertical_flip_poly,
+ vertical_flip_rle, crop_poly, crop_rle, expand_poly, expand_rle,
+ resize_poly, resize_rle, dehaze, select_bands, to_intensity, to_uint8,
+ img_flip, img_simple_rotate, decode_seg_mask)
__all__ = [
- "Compose",
- "DecodeImg",
- "Resize",
- "RandomResize",
- "ResizeByShort",
- "RandomResizeByShort",
- "ResizeByLong",
- "RandomHorizontalFlip",
- "RandomVerticalFlip",
- "Normalize",
- "CenterCrop",
- "RandomCrop",
- "RandomScaleAspect",
- "RandomExpand",
- "Pad",
- "MixupImage",
- "RandomDistort",
- "RandomBlur",
- "RandomSwap",
- "Dehaze",
- "ReduceDim",
- "SelectBand",
- "ArrangeSegmenter",
- "ArrangeChangeDetector",
- "ArrangeClassifier",
- "ArrangeDetector",
- "RandomFlipOrRotate",
+ "Compose", "DecodeImg", "Resize", "RandomResize", "ResizeByShort",
+ "RandomResizeByShort", "ResizeByLong", "RandomHorizontalFlip",
+ "RandomVerticalFlip", "Normalize", "CenterCrop", "RandomCrop",
+ "RandomScaleAspect", "RandomExpand", "Pad", "MixupImage", "RandomDistort",
+ "RandomBlur", "RandomSwap", "Dehaze", "ReduceDim", "SelectBand",
+ "ArrangeSegmenter", "ArrangeChangeDetector", "ArrangeClassifier",
+ "ArrangeDetector", "RandomFlipOrRotate", "ReloadMask"
]
interp_dict = {
@@ -74,6 +56,71 @@
}
+class Compose(object):
+ """
+ Apply a series of data augmentation strategies to the input.
+ All input images should be in Height-Width-Channel ([H, W, C]) format.
+
+ Args:
+ transforms (list[paddlers.transforms.Transform]): List of data preprocess or
+ augmentation operators.
+
+ Raises:
+ TypeError: Invalid type of transforms.
+ ValueError: Invalid length of transforms.
+ """
+
+ def __init__(self, transforms):
+ super(Compose, self).__init__()
+ if not isinstance(transforms, list):
+ raise TypeError(
+                "Type of transforms is invalid. Must be a list, but received {}."
+ .format(type(transforms)))
+ if len(transforms) < 1:
+ raise ValueError(
+                "Length of transforms must not be less than 1, but received {}."
+ .format(len(transforms)))
+ transforms = copy.deepcopy(transforms)
+ self.arrange = self._pick_arrange(transforms)
+ self.transforms = transforms
+
+ def __call__(self, sample):
+ """
+ This is equivalent to sequentially calling compose_obj.apply_transforms()
+ and compose_obj.arrange_outputs().
+ """
+
+ sample = self.apply_transforms(sample)
+ sample = self.arrange_outputs(sample)
+ return sample
+
+ def apply_transforms(self, sample):
+ for op in self.transforms:
+            # Skip batch transforms and mixup
+ if isinstance(op, (paddlers.transforms.BatchRandomResize,
+ paddlers.transforms.BatchRandomResizeByShort,
+ MixupImage)):
+ continue
+ sample = op(sample)
+ return sample
+
+ def arrange_outputs(self, sample):
+ if self.arrange is not None:
+ sample = self.arrange(sample)
+ return sample
+
+ def _pick_arrange(self, transforms):
+ arrange = None
+ for idx, op in enumerate(transforms):
+ if isinstance(op, Arrange):
+ if idx != len(transforms) - 1:
+ raise ValueError(
+ "Arrange operator must be placed at the end of the list."
+ )
+ arrange = transforms.pop(idx)
+ return arrange
+
+
class Transform(object):
"""
Parent class of all data augmentation operations
@@ -178,14 +225,14 @@ def read_img(self, img_path):
elif ext == '.npy':
return np.load(img_path)
else:
- raise TypeError('Image format {} is not supported!'.format(ext))
+ raise TypeError("Image format {} is not supported!".format(ext))
def apply_im(self, im_path):
if isinstance(im_path, str):
try:
image = self.read_img(im_path)
except:
- raise ValueError('Cannot read the image file {}!'.format(
+ raise ValueError("Cannot read the image file {}!".format(
im_path))
else:
image = im_path
@@ -217,7 +264,9 @@ def apply(self, sample):
Returns:
dict: Decoded sample.
"""
+
if 'image' in sample:
+ sample['image_ori'] = copy.deepcopy(sample['image'])
sample['image'] = self.apply_im(sample['image'])
if 'image2' in sample:
sample['image2'] = self.apply_im(sample['image2'])
@@ -227,6 +276,7 @@ def apply(self, sample):
sample['image'] = self.apply_im(sample['image_t1'])
sample['image2'] = self.apply_im(sample['image_t2'])
if 'mask' in sample:
+ sample['mask_ori'] = copy.deepcopy(sample['mask'])
sample['mask'] = self.apply_mask(sample['mask'])
im_height, im_width, _ = sample['image'].shape
se_height, se_width = sample['mask'].shape
@@ -245,61 +295,6 @@ def apply(self, sample):
return sample
-class Compose(Transform):
- """
- Apply a series of data augmentation to the input.
- All input images are in Height-Width-Channel ([H, W, C]) format.
-
- Args:
- transforms (list[paddlers.transforms.Transform]): List of data preprocess or augmentations.
- Raises:
- TypeError: Invalid type of transforms.
- ValueError: Invalid length of transforms.
- """
-
- def __init__(self, transforms, to_uint8=True):
- super(Compose, self).__init__()
- if not isinstance(transforms, list):
- raise TypeError(
- 'Type of transforms is invalid. Must be a list, but received is {}'
- .format(type(transforms)))
- if len(transforms) < 1:
- raise ValueError(
- 'Length of transforms must not be less than 1, but received is {}'
- .format(len(transforms)))
- self.transforms = transforms
- self.decode_image = DecodeImg(to_uint8=to_uint8)
- self.arrange_outputs = None
- self.apply_im_only = False
-
- def __call__(self, sample):
- if self.apply_im_only:
- if 'mask' in sample:
- mask_backup = copy.deepcopy(sample['mask'])
- del sample['mask']
- if 'aux_masks' in sample:
- aux_masks = copy.deepcopy(sample['aux_masks'])
-
- sample = self.decode_image(sample)
-
- for op in self.transforms:
- # skip batch transforms amd mixup
- if isinstance(op, (paddlers.transforms.BatchRandomResize,
- paddlers.transforms.BatchRandomResizeByShort,
- MixupImage)):
- continue
- sample = op(sample)
-
- if self.arrange_outputs is not None:
- if self.apply_im_only:
- sample['mask'] = mask_backup
- if 'aux_masks' in locals():
- sample['aux_masks'] = aux_masks
- sample = self.arrange_outputs(sample)
-
- return sample
-
-
class Resize(Transform):
"""
Resize input.
@@ -324,7 +319,7 @@ class Resize(Transform):
def __init__(self, target_size, interp='LINEAR', keep_ratio=False):
super(Resize, self).__init__()
if not (interp == "RANDOM" or interp in interp_dict):
- raise ValueError("interp should be one of {}".format(
+ raise ValueError("`interp` should be one of {}.".format(
interp_dict.keys()))
if isinstance(target_size, int):
target_size = (target_size, target_size)
@@ -332,7 +327,7 @@ def __init__(self, target_size, interp='LINEAR', keep_ratio=False):
if not (isinstance(target_size,
(list, tuple)) and len(target_size) == 2):
raise TypeError(
- "target_size should be an int or a list of length 2, but received {}".
+ "`target_size` should be an int or a list of length 2, but received {}.".
format(target_size))
# (height, width)
self.target_size = target_size
@@ -444,11 +439,11 @@ class RandomResize(Transform):
def __init__(self, target_sizes, interp='LINEAR'):
super(RandomResize, self).__init__()
if not (interp == "RANDOM" or interp in interp_dict):
- raise ValueError("interp should be one of {}".format(
+ raise ValueError("`interp` should be one of {}.".format(
interp_dict.keys()))
self.interp = interp
assert isinstance(target_sizes, list), \
- "target_size must be a list."
+            "`target_sizes` must be a list."
for i, item in enumerate(target_sizes):
if isinstance(item, int):
target_sizes[i] = (item, item)
@@ -479,7 +474,7 @@ class ResizeByShort(Transform):
def __init__(self, short_size=256, max_size=-1, interp='LINEAR'):
if not (interp == "RANDOM" or interp in interp_dict):
- raise ValueError("interp should be one of {}".format(
+            raise ValueError("`interp` should be one of {}.".format(
interp_dict.keys()))
super(ResizeByShort, self).__init__()
self.short_size = short_size
@@ -523,11 +518,11 @@ class RandomResizeByShort(Transform):
def __init__(self, short_sizes, max_size=-1, interp='LINEAR'):
super(RandomResizeByShort, self).__init__()
if not (interp == "RANDOM" or interp in interp_dict):
- raise ValueError("interp should be one of {}".format(
+            raise ValueError("`interp` should be one of {}.".format(
interp_dict.keys()))
self.interp = interp
assert isinstance(short_sizes, list), \
- "short_sizes must be a list."
+ "`short_sizes` must be a list."
self.short_sizes = short_sizes
self.max_size = max_size
@@ -575,6 +570,7 @@ class RandomFlipOrRotate(Transform):
# 定义数据增强
train_transforms = T.Compose([
+ T.DecodeImg(),
T.RandomFlipOrRotate(
probs = [0.3, 0.2] # 进行flip增强的概率是0.3,进行rotate增强的概率是0.2,不变的概率是0.5
probsf = [0.3, 0.25, 0, 0, 0] # flip增强时,使用水平flip、垂直flip的概率分别是0.3、0.25,水平且垂直flip、对角线flip、反对角线flip概率均为0,不变的概率是0.45
@@ -610,12 +606,12 @@ def apply_mask(self, mask, mode_id, flip_mode=True):
def apply_bbox(self, bbox, mode_id, flip_mode=True):
raise TypeError(
- "Currently, `paddlers.transforms.RandomFlipOrRotate` is not available for object detection tasks."
+ "Currently, RandomFlipOrRotate is not available for object detection tasks."
)
def apply_segm(self, bbox, mode_id, flip_mode=True):
raise TypeError(
- "Currently, `paddlers.transforms.RandomFlipOrRotate` is not available for object detection tasks."
+ "Currently, RandomFlipOrRotate is not available for object detection tasks."
)
def get_probs_range(self, probs):
@@ -846,11 +842,11 @@ def __init__(self,
from functools import reduce
if reduce(lambda x, y: x * y, std) == 0:
raise ValueError(
- 'Std should not contain 0, but received is {}.'.format(std))
+ "`std` should not contain 0, but received is {}.".format(std))
if reduce(lambda x, y: x * y,
[a - b for a, b in zip(max_val, min_val)]) == 0:
raise ValueError(
- '(max_val - min_val) should not contain 0, but received is {}.'.
+ "(`max_val` - `min_val`) should not contain 0, but received is {}.".
format((np.asarray(max_val) - np.asarray(min_val)).tolist()))
self.mean = mean
@@ -1154,11 +1150,11 @@ def __init__(self,
im_padding_value=127.5,
label_padding_value=255):
super(RandomExpand, self).__init__()
- assert upper_ratio > 1.01, "expand ratio must be larger than 1.01"
+ assert upper_ratio > 1.01, "`upper_ratio` must be larger than 1.01."
self.upper_ratio = upper_ratio
self.prob = prob
assert isinstance(im_padding_value, (Number, Sequence)), \
- "fill value must be either float or sequence"
+ "Value to fill must be either float or sequence."
self.im_padding_value = im_padding_value
self.label_padding_value = label_padding_value
@@ -1205,16 +1201,16 @@ def __init__(self,
if isinstance(target_size, (list, tuple)):
if len(target_size) != 2:
raise ValueError(
- '`target_size` should include 2 elements, but it is {}'.
+ "`target_size` should contain 2 elements, but it is {}.".
format(target_size))
if isinstance(target_size, int):
target_size = [target_size] * 2
assert pad_mode in [
-1, 0, 1, 2
- ], 'currently only supports four modes [-1, 0, 1, 2]'
+ ], "Currently only four modes are supported: [-1, 0, 1, 2]."
if pad_mode == -1:
- assert offsets, 'if pad_mode is -1, offsets should not be None'
+            assert offsets, "If `pad_mode` is -1, `offsets` should not be None."
self.target_size = target_size
self.size_divisor = size_divisor
@@ -1315,9 +1311,9 @@ def __init__(self, alpha=1.5, beta=1.5, mixup_epoch=-1):
"""
super(MixupImage, self).__init__()
if alpha <= 0.0:
- raise ValueError("alpha should be positive in {}".format(self))
+ raise ValueError("`alpha` should be positive in MixupImage.")
if beta <= 0.0:
- raise ValueError("beta should be positive in {}".format(self))
+ raise ValueError("`beta` should be positive in MixupImage.")
self.alpha = alpha
self.beta = beta
self.mixup_epoch = mixup_epoch
@@ -1754,55 +1750,56 @@ def __init__(self, prob=0.2):
def apply(self, sample):
if 'image2' not in sample:
- raise ValueError('image2 is not found in the sample.')
+ raise ValueError("'image2' is not found in the sample.")
if random.random() < self.prob:
sample['image'], sample['image2'] = sample['image2'], sample[
'image']
return sample
-class ArrangeSegmenter(Transform):
+class ReloadMask(Transform):
+ def apply(self, sample):
+ sample['mask'] = decode_seg_mask(sample['mask_ori'])
+ if 'aux_masks' in sample:
+ sample['aux_masks'] = list(
+ map(decode_seg_mask, sample['aux_masks_ori']))
+ return sample
+
+
+class Arrange(Transform):
def __init__(self, mode):
- super(ArrangeSegmenter, self).__init__()
+ super().__init__()
if mode not in ['train', 'eval', 'test', 'quant']:
raise ValueError(
- "mode should be defined as one of ['train', 'eval', 'test', 'quant']!"
+ "`mode` should be defined as one of ['train', 'eval', 'test', 'quant']!"
)
self.mode = mode
+
+class ArrangeSegmenter(Arrange):
def apply(self, sample):
if 'mask' in sample:
mask = sample['mask']
+ mask = mask.astype('int64')
image = permute(sample['image'], False)
if self.mode == 'train':
- mask = mask.astype('int64')
return image, mask
if self.mode == 'eval':
- mask = np.asarray(Image.open(mask))
- mask = mask[np.newaxis, :, :].astype('int64')
return image, mask
if self.mode == 'test':
return image,
-class ArrangeChangeDetector(Transform):
- def __init__(self, mode):
- super(ArrangeChangeDetector, self).__init__()
- if mode not in ['train', 'eval', 'test', 'quant']:
- raise ValueError(
- "mode should be defined as one of ['train', 'eval', 'test', 'quant']!"
- )
- self.mode = mode
-
+class ArrangeChangeDetector(Arrange):
def apply(self, sample):
if 'mask' in sample:
mask = sample['mask']
+ mask = mask.astype('int64')
image_t1 = permute(sample['image'], False)
image_t2 = permute(sample['image2'], False)
if self.mode == 'train':
- mask = mask.astype('int64')
masks = [mask]
if 'aux_masks' in sample:
masks.extend(
@@ -1811,22 +1808,12 @@ def apply(self, sample):
image_t1,
image_t2, ) + tuple(masks)
if self.mode == 'eval':
- mask = np.asarray(Image.open(mask))
- mask = mask[np.newaxis, :, :].astype('int64')
return image_t1, image_t2, mask
if self.mode == 'test':
return image_t1, image_t2,
-class ArrangeClassifier(Transform):
- def __init__(self, mode):
- super(ArrangeClassifier, self).__init__()
- if mode not in ['train', 'eval', 'test', 'quant']:
- raise ValueError(
- "mode should be defined as one of ['train', 'eval', 'test', 'quant']!"
- )
- self.mode = mode
-
+class ArrangeClassifier(Arrange):
def apply(self, sample):
image = permute(sample['image'], False)
if self.mode in ['train', 'eval']:
@@ -1835,15 +1822,7 @@ def apply(self, sample):
return image
-class ArrangeDetector(Transform):
- def __init__(self, mode):
- super(ArrangeDetector, self).__init__()
- if mode not in ['train', 'eval', 'test', 'quant']:
- raise ValueError(
- "mode should be defined as one of ['train', 'eval', 'test', 'quant']!"
- )
- self.mode = mode
-
+class ArrangeDetector(Arrange):
def apply(self, sample):
if self.mode == 'eval' and 'gt_poly' in sample:
del sample['gt_poly']
diff --git a/paddlers/utils/__init__.py b/paddlers/utils/__init__.py
index 842e5331..950ea735 100644
--- a/paddlers/utils/__init__.py
+++ b/paddlers/utils/__init__.py
@@ -15,8 +15,8 @@
from . import logging
from . import utils
from .utils import (seconds_to_hms, get_encoding, get_single_card_bs, dict2str,
- EarlyStop, path_normalization, is_pic, MyEncoder,
- DisablePrint, Timer)
+ EarlyStop, norm_path, is_pic, MyEncoder, DisablePrint,
+ Timer)
from .checkpoint import get_pretrain_weights, load_pretrain_weights, load_checkpoint
from .env import get_environ_info, get_num_workers, init_parallel_env
from .download import download_and_decompress, decompress
diff --git a/paddlers/utils/utils.py b/paddlers/utils/utils.py
index 711f2868..5756530d 100644
--- a/paddlers/utils/utils.py
+++ b/paddlers/utils/utils.py
@@ -69,7 +69,7 @@ def dict2str(dict_input):
return out.strip(', ')
-def path_normalization(path):
+def norm_path(path):
win_sep = "\\"
other_sep = "/"
if platform.system() == "Windows":
diff --git a/setup.py b/setup.py
index 258eca06..95a3ecfb 100644
--- a/setup.py
+++ b/setup.py
@@ -31,7 +31,7 @@
description=DESCRIPTION,
long_description=LONG_DESCRIPTION,
long_description_content_type="text/plain",
- url="https://github.com/PaddleCV-SIG/PaddleRS",
+ url="https://github.com/PaddlePaddle/PaddleRS",
packages=setuptools.find_packages(),
python_requires='>=3.7',
setup_requires=['cython', 'numpy'],
diff --git a/tests/data/data_utils.py b/tests/data/data_utils.py
index b0d421ba..5a8b3d45 100644
--- a/tests/data/data_utils.py
+++ b/tests/data/data_utils.py
@@ -325,7 +325,7 @@ def _parse_coco_files(self, im_dir, ann_path):
def build_input_from_file(file_list, prefix='', task='auto', label_list=None):
"""
- Construct a list of dictionaries from file. Each dict in the list can be used as the input to `paddlers.transforms.Transform` objects.
+    Construct a list of dictionaries from a file list. Each dict in the list can be used as the input to paddlers.transforms.Transform objects.
Args:
file_list (str): Path of file_list.
diff --git a/tests/deploy/test_predictor.py b/tests/deploy/test_predictor.py
index 141556b6..62839516 100644
--- a/tests/deploy/test_predictor.py
+++ b/tests/deploy/test_predictor.py
@@ -120,7 +120,10 @@ def check_predictor(self, predictor, trainer):
t2_path = "data/ssmt/optical_t2.bmp"
single_input = (t1_path, t2_path)
num_inputs = 2
- transforms = pdrs.transforms.Compose([pdrs.transforms.Normalize()])
+ transforms = pdrs.transforms.Compose([
+ pdrs.transforms.DecodeImg(), pdrs.transforms.Normalize(),
+ pdrs.transforms.ArrangeChangeDetector('test')
+ ])
# Expected failure
with self.assertRaises(ValueError):
@@ -184,7 +187,10 @@ class TestClasPredictor(TestPredictor):
def check_predictor(self, predictor, trainer):
single_input = "data/ssmt/optical_t1.bmp"
num_inputs = 2
- transforms = pdrs.transforms.Compose([pdrs.transforms.Normalize()])
+ transforms = pdrs.transforms.Compose([
+ pdrs.transforms.DecodeImg(), pdrs.transforms.Normalize(),
+ pdrs.transforms.ArrangeClassifier('test')
+ ])
labels = list(range(2))
trainer.labels = labels
predictor._model.labels = labels
@@ -249,7 +255,10 @@ def check_predictor(self, predictor, trainer):
# given that the network is (partially?) randomly initialized.
single_input = "data/ssmt/optical_t1.bmp"
num_inputs = 2
- transforms = pdrs.transforms.Compose([pdrs.transforms.Normalize()])
+ transforms = pdrs.transforms.Compose([
+ pdrs.transforms.DecodeImg(), pdrs.transforms.Normalize(),
+ pdrs.transforms.ArrangeDetector('test')
+ ])
labels = list(range(80))
trainer.labels = labels
predictor._model.labels = labels
@@ -303,7 +312,10 @@ class TestSegPredictor(TestPredictor):
def check_predictor(self, predictor, trainer):
single_input = "data/ssmt/optical_t1.bmp"
num_inputs = 2
- transforms = pdrs.transforms.Compose([pdrs.transforms.Normalize()])
+ transforms = pdrs.transforms.Compose([
+ pdrs.transforms.DecodeImg(), pdrs.transforms.Normalize(),
+ pdrs.transforms.ArrangeSegmenter('test')
+ ])
# Single input (file path)
input_ = single_input
diff --git a/tests/download_test_data.sh b/tests/download_test_data.sh
index f672acd3..5fbd14af 100644
--- a/tests/download_test_data.sh
+++ b/tests/download_test_data.sh
@@ -4,7 +4,7 @@ function remove_dir_if_exist() {
local dir="$1"
if [ -d "${dir}" ]; then
rm -rf "${dir}"
- echo "\033[0;31mDirectory ${dir} has been removed.\033[0m"
+ echo -e "\033[0;31mDirectory ${dir} has been removed.\033[0m"
fi
}
diff --git a/tests/test_tutorials.py b/tests/test_tutorials.py
index 48535274..f3fa67ae 100644
--- a/tests/test_tutorials.py
+++ b/tests/test_tutorials.py
@@ -29,7 +29,7 @@ class TestTutorial(CpuCommonTest):
@classmethod
def setUpClass(cls):
cls._td = tempfile.TemporaryDirectory(dir='./')
- # Recursively copy the content of `cls.SUBDIR` to td.
+ # Recursively copy the content of cls.SUBDIR to td.
# This is necessary for running scripts in td.
cls._TSUBDIR = osp.join(cls._td.name, osp.basename(cls.SUBDIR))
shutil.copytree(cls.SUBDIR, cls._TSUBDIR)
@@ -47,7 +47,7 @@ def add_tests(cls):
def _test_tutorial(script_name):
def _test_tutorial_impl(self):
- # Set working directory to `cls._TSUBDIR` such that the
+ # Set working directory to cls._TSUBDIR such that the
# files generated by the script will be automatically cleaned.
run_script(f"python {script_name}", wd=cls._TSUBDIR)
diff --git a/tools/raster2geotiff.py b/tools/raster2geotiff.py
index 63b980d7..d7614c21 100644
--- a/tools/raster2geotiff.py
+++ b/tools/raster2geotiff.py
@@ -46,7 +46,7 @@ def convert_data(image_path, geojson_path):
geo_points = geo["coordinates"][0][0]
else:
raise TypeError(
- "Geometry type must be `Polygon` or `MultiPolygon`, not {}.".
+ "Geometry type must be 'Polygon' or 'MultiPolygon', not {}.".
format(geo["type"]))
xy_points = np.array([
_gt_convert(point[0], point[1], raster.geot) for point in geo_points
diff --git a/tools/raster2vector.py b/tools/raster2vector.py
index 99bdbc3c..a76ea92d 100644
--- a/tools/raster2vector.py
+++ b/tools/raster2vector.py
@@ -76,7 +76,7 @@ def raster2vector(srcimg_path, mask_path, save_path, ignore_index=255):
vec_ext = save_path.split(".")[-1].lower()
if vec_ext not in ["json", "geojson", "shp"]:
raise ValueError(
- "The ext of `save_path` must be `json/geojson` or `shp`, not {}.".
+ "The extension of `save_path` must be 'json/geojson' or 'shp', not {}.".
format(vec_ext))
ras_ext = srcimg_path.split(".")[-1].lower()
if osp.exists(srcimg_path) and ras_ext in ["tif", "tiff", "geotiff", "img"]:
@@ -93,7 +93,7 @@ def raster2vector(srcimg_path, mask_path, save_path, ignore_index=255):
parser.add_argument("--mask_path", type=str, required=True, \
help="Path of mask data.")
parser.add_argument("--save_path", type=str, required=True, \
- help="Path to save the shape file (the file suffix is `.json/geojson` or `.shp`).")
+ help="Path to save the shape file (the extension is .json/geojson or .shp).")
parser.add_argument("--srcimg_path", type=str, default="", \
help="Path of original data with geoinfo. Default to empty.")
parser.add_argument("--ignore_index", type=int, default=255, \
diff --git a/tools/split.py b/tools/split.py
index 6a2acb9b..8d6ce315 100644
--- a/tools/split.py
+++ b/tools/split.py
@@ -75,7 +75,7 @@ def split_data(image_path, mask_path, block_size, save_dir):
parser.add_argument("--block_size", type=int, default=512, \
help="Size of image block. Default value is 512.")
parser.add_argument("--save_dir", type=str, default="dataset", \
- help="Directory to save the results. Default value is `dataset`.")
+ help="Directory to save the results. Default value is 'dataset'.")
if __name__ == "__main__":
args = parser.parse_args()
diff --git a/tools/utils/raster.py b/tools/utils/raster.py
index 3fb5ae48..e73f5161 100644
--- a/tools/utils/raster.py
+++ b/tools/utils/raster.py
@@ -42,7 +42,7 @@ def _get_type(type_name: str) -> int:
elif type_name == "complex64":
gdal_type = gdal.GDT_CFloat64
else:
- raise TypeError("Non-suported data type `{}`.".format(type_name))
+        raise TypeError("Unsupported data type {}.".format(type_name))
return gdal_type
@@ -76,7 +76,7 @@ def __init__(self,
# https://www.osgeo.cn/gdal/drivers/raster/index.html
self._src_data = gdal.Open(path)
except:
- raise TypeError("Unsupported data format: `{}`".format(
+ raise TypeError("Unsupported data format: {}".format(
self.ext_type))
else:
raise ValueError("The path {0} not exists.".format(path))
diff --git a/tutorials/train/README.md b/tutorials/train/README.md
index 4b5b3d70..03286079 100644
--- a/tutorials/train/README.md
+++ b/tutorials/train/README.md
@@ -43,7 +43,7 @@
PaddleRS代码会跟随开发进度不断更新,可以安装develop分支的代码使用最新的功能,安装方式如下:
```
-git clone https://github.com/PaddleCV-SIG/PaddleRS
+git clone https://github.com/PaddlePaddle/PaddleRS
cd PaddleRS
git checkout develop
pip install -r requirements.txt
diff --git a/tutorials/train/change_detection/bit.py b/tutorials/train/change_detection/bit.py
index 9ee72f52..e550d19a 100644
--- a/tutorials/train/change_detection/bit.py
+++ b/tutorials/train/change_detection/bit.py
@@ -21,8 +21,10 @@
# 定义训练和验证时使用的数据变换(数据增强、预处理等)
# 使用Compose组合多种变换方式。Compose中包含的变换将按顺序串行执行
-# API说明:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API说明:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # 读取影像
+ T.DecodeImg(),
# 随机裁剪
T.RandomCrop(
# 裁剪区域将被缩放到256x256
@@ -36,12 +38,16 @@
# 将数据归一化到[-1,1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# 验证阶段与训练阶段的数据归一化方式必须相同
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ReloadMask(),
+ T.ArrangeChangeDetector('eval')
])
# 分别构建训练和验证所用的数据集
@@ -66,9 +72,9 @@
binarize_labels=True)
# 使用默认参数构建BIT模型
-# 目前已支持的模型请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# 模型输入参数请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
-model = pdrs.tasks.cd.BIT()
+# 目前已支持的模型请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# 模型输入参数请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
+model = pdrs.tasks.BIT()
# 执行模型训练
model.train(
diff --git a/tutorials/train/change_detection/cdnet.py b/tutorials/train/change_detection/cdnet.py
index d8f38eec..142919be 100644
--- a/tutorials/train/change_detection/cdnet.py
+++ b/tutorials/train/change_detection/cdnet.py
@@ -21,8 +21,10 @@
# 定义训练和验证时使用的数据变换(数据增强、预处理等)
# 使用Compose组合多种变换方式。Compose中包含的变换将按顺序串行执行
-# API说明:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API说明:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # 读取影像
+ T.DecodeImg(),
# 随机裁剪
T.RandomCrop(
# 裁剪区域将被缩放到256x256
@@ -36,12 +38,16 @@
# 将数据归一化到[-1,1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# 验证阶段与训练阶段的数据归一化方式必须相同
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ReloadMask(),
+ T.ArrangeChangeDetector('eval')
])
# 分别构建训练和验证所用的数据集
@@ -66,9 +72,9 @@
binarize_labels=True)
# 使用默认参数构建CDNet模型
-# 目前已支持的模型请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# 模型输入参数请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
-model = pdrs.tasks.cd.CDNet()
+# 目前已支持的模型请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# 模型输入参数请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
+model = pdrs.tasks.CDNet()
# 执行模型训练
model.train(
diff --git a/tutorials/train/change_detection/dsamnet.py b/tutorials/train/change_detection/dsamnet.py
index 380bbc8d..3a337a68 100644
--- a/tutorials/train/change_detection/dsamnet.py
+++ b/tutorials/train/change_detection/dsamnet.py
@@ -21,8 +21,10 @@
# 定义训练和验证时使用的数据变换(数据增强、预处理等)
# 使用Compose组合多种变换方式。Compose中包含的变换将按顺序串行执行
-# API说明:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API说明:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # 读取影像
+ T.DecodeImg(),
# 随机裁剪
T.RandomCrop(
# 裁剪区域将被缩放到256x256
@@ -36,12 +38,16 @@
# 将数据归一化到[-1,1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# 验证阶段与训练阶段的数据归一化方式必须相同
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ReloadMask(),
+ T.ArrangeChangeDetector('eval')
])
# 分别构建训练和验证所用的数据集
@@ -66,9 +72,9 @@
binarize_labels=True)
# 使用默认参数构建DSAMNet模型
-# 目前已支持的模型请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# 模型输入参数请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
-model = pdrs.tasks.cd.DSAMNet()
+# 目前已支持的模型请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# 模型输入参数请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
+model = pdrs.tasks.DSAMNet()
# 执行模型训练
model.train(
diff --git a/tutorials/train/change_detection/dsifn.py b/tutorials/train/change_detection/dsifn.py
index ef7183c3..6f0d11b2 100644
--- a/tutorials/train/change_detection/dsifn.py
+++ b/tutorials/train/change_detection/dsifn.py
@@ -21,8 +21,10 @@
# 定义训练和验证时使用的数据变换(数据增强、预处理等)
# 使用Compose组合多种变换方式。Compose中包含的变换将按顺序串行执行
-# API说明:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API说明:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # 读取影像
+ T.DecodeImg(),
# 随机裁剪
T.RandomCrop(
# 裁剪区域将被缩放到256x256
@@ -36,12 +38,16 @@
# 将数据归一化到[-1,1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# 验证阶段与训练阶段的数据归一化方式必须相同
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ReloadMask(),
+ T.ArrangeChangeDetector('eval')
])
# 分别构建训练和验证所用的数据集
@@ -66,9 +72,9 @@
binarize_labels=True)
# 使用默认参数构建DSIFN模型
-# 目前已支持的模型请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# 模型输入参数请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
-model = pdrs.tasks.cd.DSIFN()
+# 目前已支持的模型请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# 模型输入参数请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
+model = pdrs.tasks.DSIFN()
# 执行模型训练
model.train(
diff --git a/tutorials/train/change_detection/fc_ef.py b/tutorials/train/change_detection/fc_ef.py
index c3b2c3ce..af5ca9c0 100644
--- a/tutorials/train/change_detection/fc_ef.py
+++ b/tutorials/train/change_detection/fc_ef.py
@@ -21,8 +21,10 @@
# 定义训练和验证时使用的数据变换(数据增强、预处理等)
# 使用Compose组合多种变换方式。Compose中包含的变换将按顺序串行执行
-# API说明:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API说明:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # 读取影像
+ T.DecodeImg(),
# 随机裁剪
T.RandomCrop(
# 裁剪区域将被缩放到256x256
@@ -36,12 +38,16 @@
# 将数据归一化到[-1,1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# 验证阶段与训练阶段的数据归一化方式必须相同
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ReloadMask(),
+ T.ArrangeChangeDetector('eval')
])
# 分别构建训练和验证所用的数据集
@@ -66,9 +72,9 @@
binarize_labels=True)
# 使用默认参数构建FC-EF模型
-# 目前已支持的模型请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# 模型输入参数请参考:https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
-model = pdrs.tasks.cd.FCEarlyFusion()
+# 目前已支持的模型请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# 模型输入参数请参考:https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
+model = pdrs.tasks.FCEarlyFusion()
# 执行模型训练
model.train(
diff --git a/tutorials/train/change_detection/fc_siam_conc.py b/tutorials/train/change_detection/fc_siam_conc.py
index 46a5734a..71c8f0e9 100644
--- a/tutorials/train/change_detection/fc_siam_conc.py
+++ b/tutorials/train/change_detection/fc_siam_conc.py
@@ -21,8 +21,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Random cropping
T.RandomCrop(
# The cropped region will be resized to 256x256
@@ -36,12 +38,16 @@
# Normalize the data to [-1, 1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# Normalization during validation must match that used during training
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ReloadMask(),
+ T.ArrangeChangeDetector('eval')
])
# Build the training and validation datasets
@@ -66,9 +72,9 @@
binarize_labels=True)
# Build the FC-Siam-conc model with default parameters
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
-model = pdrs.tasks.cd.FCSiamConc()
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
+model = pdrs.tasks.FCSiamConc()
# Train the model
model.train(
diff --git a/tutorials/train/change_detection/fc_siam_diff.py b/tutorials/train/change_detection/fc_siam_diff.py
index 0a85001f..22f49119 100644
--- a/tutorials/train/change_detection/fc_siam_diff.py
+++ b/tutorials/train/change_detection/fc_siam_diff.py
@@ -21,8 +21,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Random cropping
T.RandomCrop(
# The cropped region will be resized to 256x256
@@ -36,12 +38,16 @@
# Normalize the data to [-1, 1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# Normalization during validation must match that used during training
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ReloadMask(),
+ T.ArrangeChangeDetector('eval')
])
# Build the training and validation datasets
@@ -66,9 +72,9 @@
binarize_labels=True)
# Build the FC-Siam-diff model with default parameters
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
-model = pdrs.tasks.cd.FCSiamDiff()
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
+model = pdrs.tasks.FCSiamDiff()
# Train the model
model.train(
diff --git a/tutorials/train/change_detection/snunet.py b/tutorials/train/change_detection/snunet.py
index cd2004a0..fdc9411d 100644
--- a/tutorials/train/change_detection/snunet.py
+++ b/tutorials/train/change_detection/snunet.py
@@ -21,8 +21,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Random cropping
T.RandomCrop(
# The cropped region will be resized to 256x256
@@ -36,12 +38,16 @@
# Normalize the data to [-1, 1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# Normalization during validation must match that used during training
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ReloadMask(),
+ T.ArrangeChangeDetector('eval')
])
# Build the training and validation datasets
@@ -66,9 +72,9 @@
binarize_labels=True)
# Build the SNUNet model with default parameters
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
-model = pdrs.tasks.cd.SNUNet()
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
+model = pdrs.tasks.SNUNet()
# Train the model
model.train(
diff --git a/tutorials/train/change_detection/stanet.py b/tutorials/train/change_detection/stanet.py
index f88c9745..fea77ac6 100644
--- a/tutorials/train/change_detection/stanet.py
+++ b/tutorials/train/change_detection/stanet.py
@@ -21,8 +21,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Random cropping
T.RandomCrop(
# The cropped region will be resized to 256x256
@@ -36,12 +38,16 @@
# Normalize the data to [-1, 1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeChangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# Normalization during validation must match that used during training
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ReloadMask(),
+ T.ArrangeChangeDetector('eval')
])
# Build the training and validation datasets
@@ -66,9 +72,9 @@
binarize_labels=True)
# Build the STANet model with default parameters
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
-model = pdrs.tasks.cd.STANet()
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/change_detector.py
+model = pdrs.tasks.STANet()
# Train the model
model.train(
diff --git a/tutorials/train/classification/condensenetv2_b_rs_mul.py b/tutorials/train/classification/condensenetv2_b_rs_mul.py
index 21082ddf..d43689cd 100644
--- a/tutorials/train/classification/condensenetv2_b_rs_mul.py
+++ b/tutorials/train/classification/condensenetv2_b_rs_mul.py
@@ -3,18 +3,21 @@
# Define the transforms used for training and validation
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
T.SelectBand([5, 10, 15, 20, 25]), # for test
T.Resize(target_size=224),
T.RandomHorizontalFlip(),
T.Normalize(
mean=[0.5, 0.5, 0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5, 0.5, 0.5]),
+ T.ArrangeClassifier('train')
])
eval_transforms = T.Compose([
- T.SelectBand([5, 10, 15, 20, 25]),
- T.Resize(target_size=224),
+ T.DecodeImg(), T.SelectBand([5, 10, 15, 20, 25]), T.Resize(target_size=224),
T.Normalize(
- mean=[0.5, 0.5, 0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5, 0.5, 0.5]),
+ mean=[0.5, 0.5, 0.5, 0.5, 0.5],
+ std=[0.5, 0.5, 0.5, 0.5, 0.5]), T.ArrangeClassifier('eval')
])
# Define the training and validation datasets
diff --git a/tutorials/train/classification/hrnet.py b/tutorials/train/classification/hrnet.py
index 6a74d443..5c3ff68e 100644
--- a/tutorials/train/classification/hrnet.py
+++ b/tutorials/train/classification/hrnet.py
@@ -25,8 +25,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Resize the image to 256x256
T.Resize(target_size=256),
# Randomly flip horizontally with 50% probability
@@ -36,13 +38,16 @@
# Normalize the data to [-1, 1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeClassifier('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
T.Resize(target_size=256),
# Normalization during validation must match that used during training
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeClassifier('eval')
])
# Build the training and validation datasets
@@ -63,9 +68,9 @@
shuffle=False)
# Build the HRNet model with default parameters
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/classifier.py
-model = pdrs.tasks.clas.HRNet_W18_C(num_classes=len(train_dataset.labels))
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/classifier.py
+model = pdrs.tasks.HRNet_W18_C(num_classes=len(train_dataset.labels))
# Train the model
model.train(
diff --git a/tutorials/train/classification/mobilenetv3.py b/tutorials/train/classification/mobilenetv3.py
index ce187d31..80bd5392 100644
--- a/tutorials/train/classification/mobilenetv3.py
+++ b/tutorials/train/classification/mobilenetv3.py
@@ -25,8 +25,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Resize the image to 256x256
T.Resize(target_size=256),
# Randomly flip horizontally with 50% probability
@@ -36,13 +38,16 @@
# Normalize the data to [-1, 1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeClassifier('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
T.Resize(target_size=256),
# Normalization during validation must match that used during training
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeClassifier('eval')
])
# Build the training and validation datasets
@@ -63,10 +68,9 @@
shuffle=False)
# Build the MobileNetV3 model with default parameters
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/classifier.py
-model = pdrs.tasks.clas.MobileNetV3_small_x1_0(
- num_classes=len(train_dataset.labels))
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/classifier.py
+model = pdrs.tasks.MobileNetV3_small_x1_0(num_classes=len(train_dataset.labels))
# Train the model
model.train(
diff --git a/tutorials/train/classification/resnet50_vd.py b/tutorials/train/classification/resnet50_vd.py
index 13fbd33c..57b14f78 100644
--- a/tutorials/train/classification/resnet50_vd.py
+++ b/tutorials/train/classification/resnet50_vd.py
@@ -25,8 +25,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Resize the image to 256x256
T.Resize(target_size=256),
# Randomly flip horizontally with 50% probability
@@ -36,13 +38,16 @@
# Normalize the data to [-1, 1]
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeClassifier('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
T.Resize(target_size=256),
# Normalization during validation must match that used during training
T.Normalize(
mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ T.ArrangeClassifier('eval')
])
# Build the training and validation datasets
@@ -63,9 +68,9 @@
shuffle=False)
# Build the ResNet50-vd model with default parameters
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/classifier.py
-model = pdrs.tasks.clas.ResNet50_vd(num_classes=len(train_dataset.labels))
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/classifier.py
+model = pdrs.tasks.ResNet50_vd(num_classes=len(train_dataset.labels))
# Train the model
model.train(
diff --git a/tutorials/train/object_detection/faster_rcnn.py b/tutorials/train/object_detection/faster_rcnn.py
index bf2ddd21..ad2f668e 100644
--- a/tutorials/train/object_detection/faster_rcnn.py
+++ b/tutorials/train/object_detection/faster_rcnn.py
@@ -28,8 +28,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Apply random color distortion to the input image
T.RandomDistort(),
# Apply random padding at the image borders
@@ -44,16 +46,19 @@
interp='RANDOM'),
# Normalize the image
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# Resize the input image to a fixed size using bicubic interpolation
T.Resize(
target_size=608, interp='CUBIC'),
# Normalization during validation must match that used during training
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('eval')
])
# Build the training and validation datasets
@@ -72,9 +77,9 @@
shuffle=False)
# Build the Faster R-CNN model
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
-model = pdrs.tasks.det.FasterRCNN(num_classes=len(train_dataset.labels))
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
+model = pdrs.tasks.FasterRCNN(num_classes=len(train_dataset.labels))
# Train the model
model.train(
diff --git a/tutorials/train/object_detection/ppyolo.py b/tutorials/train/object_detection/ppyolo.py
index 6b925aed..bdb8fe82 100644
--- a/tutorials/train/object_detection/ppyolo.py
+++ b/tutorials/train/object_detection/ppyolo.py
@@ -29,8 +29,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Apply random color distortion to the input image
T.RandomDistort(),
# Apply random padding at the image borders
@@ -45,16 +47,19 @@
interp='RANDOM'),
# Normalize the image
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# Resize the input image to a fixed size using bicubic interpolation
T.Resize(
target_size=608, interp='CUBIC'),
# Normalization during validation must match that used during training
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('eval')
])
# Build the training and validation datasets
@@ -73,9 +78,9 @@
shuffle=False)
# Build the PP-YOLO model
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
-model = pdrs.tasks.det.PPYOLO(num_classes=len(train_dataset.labels))
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
+model = pdrs.tasks.PPYOLO(num_classes=len(train_dataset.labels))
# Train the model
model.train(
diff --git a/tutorials/train/object_detection/ppyolotiny.py b/tutorials/train/object_detection/ppyolotiny.py
index 37b72c89..bbe20661 100644
--- a/tutorials/train/object_detection/ppyolotiny.py
+++ b/tutorials/train/object_detection/ppyolotiny.py
@@ -29,8 +29,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Apply random color distortion to the input image
T.RandomDistort(),
# Apply random padding at the image borders
@@ -45,16 +47,19 @@
interp='RANDOM'),
# Normalize the image
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# Resize the input image to a fixed size using bicubic interpolation
T.Resize(
target_size=608, interp='CUBIC'),
# Normalization during validation must match that used during training
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('eval')
])
# Build the training and validation datasets
@@ -73,9 +78,9 @@
shuffle=False)
# Build the PP-YOLO Tiny model
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
-model = pdrs.tasks.det.PPYOLOTiny(num_classes=len(train_dataset.labels))
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
+model = pdrs.tasks.PPYOLOTiny(num_classes=len(train_dataset.labels))
# Train the model
model.train(
diff --git a/tutorials/train/object_detection/ppyolov2.py b/tutorials/train/object_detection/ppyolov2.py
index d031348d..933a478f 100644
--- a/tutorials/train/object_detection/ppyolov2.py
+++ b/tutorials/train/object_detection/ppyolov2.py
@@ -29,8 +29,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Apply random color distortion to the input image
T.RandomDistort(),
# Apply random padding at the image borders
@@ -45,16 +47,19 @@
interp='RANDOM'),
# Normalize the image
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# Resize the input image to a fixed size using bicubic interpolation
T.Resize(
target_size=608, interp='CUBIC'),
# Normalization during validation must match that used during training
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('eval')
])
# Build the training and validation datasets
@@ -73,9 +78,9 @@
shuffle=False)
# Build the PP-YOLOv2 model
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
-model = pdrs.tasks.det.PPYOLOv2(num_classes=len(train_dataset.labels))
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
+model = pdrs.tasks.PPYOLOv2(num_classes=len(train_dataset.labels))
# Train the model
model.train(
diff --git a/tutorials/train/object_detection/yolov3.py b/tutorials/train/object_detection/yolov3.py
index cc04b1fa..79b2b744 100644
--- a/tutorials/train/object_detection/yolov3.py
+++ b/tutorials/train/object_detection/yolov3.py
@@ -29,8 +29,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Apply random color distortion to the input image
T.RandomDistort(),
# Apply random padding at the image borders
@@ -45,16 +47,19 @@
interp='RANDOM'),
# Normalize the image
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
# Resize the input image to a fixed size using bicubic interpolation
T.Resize(
target_size=608, interp='CUBIC'),
# Normalization during validation must match that used during training
T.Normalize(
- mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
+ mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
+ T.ArrangeDetector('eval')
])
# Build the training and validation datasets
@@ -73,9 +78,9 @@
shuffle=False)
# Build the YOLOv3 model with DarkNet53 as the backbone
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
-model = pdrs.tasks.det.YOLOv3(
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/object_detector.py
+model = pdrs.tasks.YOLOv3(
num_classes=len(train_dataset.labels), backbone='DarkNet53')
# Train the model
diff --git a/tutorials/train/semantic_segmentation/deeplabv3p.py b/tutorials/train/semantic_segmentation/deeplabv3p.py
index fd5aaf7a..ed99ec07 100644
--- a/tutorials/train/semantic_segmentation/deeplabv3p.py
+++ b/tutorials/train/semantic_segmentation/deeplabv3p.py
@@ -28,8 +28,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Resize the image to 512x512
T.Resize(target_size=512),
# Randomly flip horizontally with 50% probability
@@ -37,13 +39,17 @@
# Normalize the data to [-1, 1]
T.Normalize(
mean=[0.5] * NUM_BANDS, std=[0.5] * NUM_BANDS),
+ T.ArrangeSegmenter('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
T.Resize(target_size=512),
# Normalization during validation must match that used during training
T.Normalize(
mean=[0.5] * NUM_BANDS, std=[0.5] * NUM_BANDS),
+ T.ReloadMask(),
+ T.ArrangeSegmenter('eval')
])
# Build the training and validation datasets
@@ -64,9 +70,9 @@
shuffle=False)
# Build the DeepLab V3+ model with ResNet-50 as the backbone
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/segmenter.py
-model = pdrs.tasks.seg.DeepLabV3P(
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/segmenter.py
+model = pdrs.tasks.DeepLabV3P(
input_channel=NUM_BANDS,
num_classes=len(train_dataset.labels),
backbone='ResNet50_vd')
diff --git a/tutorials/train/semantic_segmentation/unet.py b/tutorials/train/semantic_segmentation/unet.py
index fb94105f..f938ff1f 100644
--- a/tutorials/train/semantic_segmentation/unet.py
+++ b/tutorials/train/semantic_segmentation/unet.py
@@ -28,8 +28,10 @@
# Define the data transforms (augmentation, preprocessing, etc.) used for training and validation
# Use Compose to combine multiple transforms; the transforms in Compose are applied sequentially in order
-# API reference: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/transforms.md
+# API reference: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/transforms.md
train_transforms = T.Compose([
+ # Decode the image
+ T.DecodeImg(),
# Resize the image to 512x512
T.Resize(target_size=512),
# Randomly flip horizontally with 50% probability
@@ -37,13 +39,17 @@
# Normalize the data to [-1, 1]
T.Normalize(
mean=[0.5] * NUM_BANDS, std=[0.5] * NUM_BANDS),
+ T.ArrangeSegmenter('train')
])
eval_transforms = T.Compose([
+ T.DecodeImg(),
T.Resize(target_size=512),
# Normalization during validation must match that used during training
T.Normalize(
mean=[0.5] * NUM_BANDS, std=[0.5] * NUM_BANDS),
+ T.ReloadMask(),
+ T.ArrangeSegmenter('eval')
])
# Build the training and validation datasets
@@ -64,9 +70,9 @@
shuffle=False)
# Build the UNet model
-# For the list of currently supported models, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/docs/apis/model_zoo.md
-# For model input parameters, see: https://github.com/PaddleCV-SIG/PaddleRS/blob/develop/paddlers/tasks/segmenter.py
-model = pdrs.tasks.seg.UNet(
+# For the list of currently supported models, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/docs/apis/model_zoo.md
+# For model input parameters, see: https://github.com/PaddlePaddle/PaddleRS/blob/develop/paddlers/tasks/segmenter.py
+model = pdrs.tasks.UNet(
input_channel=NUM_BANDS, num_classes=len(train_dataset.labels))
# Train the model