Mask R-CNN ResNet50 Atrous is trained on the Common Objects in Context (COCO) dataset and is used for object instance segmentation. For details, see the paper.
| Metric | Value |
|---|---|
| Type | Instance segmentation |
| GFlops | 294.738 |
| MParams | 50.222 |
| Source framework | TensorFlow* |
Accuracy:

| Metric | Value |
|---|---|
| coco_orig_precision | 29.75% |
| coco_orig_segm_precision | 27.46% |
Inputs of the original model:

- Image, name: `image_tensor`, shape: `1, 800, 1365, 3`, format: `B, H, W, C`, where:

    - `B` - batch size
    - `H` - image height
    - `W` - image width
    - `C` - number of channels

    Expected color order: `RGB` (a preprocessing sketch is shown below).
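The following is a minimal preprocessing sketch for this input, assuming OpenCV and NumPy are available; the helper name and the fixed resize to 800x1365 without aspect-ratio preservation are illustrative only.

```python
# Illustrative preprocessing for the original model: produce a batch of shape
# (1, 800, 1365, 3) in B, H, W, C layout with RGB channel order.
import cv2
import numpy as np


def preprocess_original(image_path):
    image = cv2.imread(image_path)                  # OpenCV loads images as BGR
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # the original model expects RGB
    image = cv2.resize(image, (1365, 800))          # cv2.resize takes (width, height)
    return np.expand_dims(image, axis=0)            # add the batch dimension
```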
Inputs of the converted model:

- Image, name: `image_tensor`, shape: `1, 3, 800, 1365`, format: `B, C, H, W`, where:

    - `B` - batch size
    - `C` - number of channels
    - `H` - image height
    - `W` - image width

    Expected color order: `BGR`.

- Information of input image size, name: `image_info`, shape: `1, 3`, format: `B, C`, where:

    - `B` - batch size
    - `C` - vector of 3 values in the format `H, W, S`, where `H` is the image height, `W` is the image width, and `S` is the image scale factor (usually 1)

    A sketch that prepares both inputs is shown below.
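Below is a minimal sketch that prepares both inputs of the converted model, assuming OpenCV and NumPy; the helper name is hypothetical, and the simple resize without aspect-ratio preservation is only one possible choice.

```python
# Illustrative preprocessing for the converted model: a BGR image in
# B, C, H, W layout plus the accompanying image_info tensor (H, W, scale).
import cv2
import numpy as np


def preprocess_converted(image_path):
    image = cv2.imread(image_path)                        # already BGR, as the model expects
    image = cv2.resize(image, (1365, 800))                # cv2.resize takes (width, height)
    image = image.transpose(2, 0, 1)                      # H, W, C -> C, H, W
    image = np.expand_dims(image, 0).astype(np.float32)   # add the batch dimension
    image_info = np.array([[800, 1365, 1]], dtype=np.float32)  # H, W, scale factor
    return image, image_info
```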
Outputs of the original model:

- Classifier, name: `detection_classes`. Contains predicted bounding box classes in the range [1, 91]. The model was trained on a Common Objects in Context (COCO) dataset version with 90 categories of objects; class 0 is reserved for the background.
- Probability, name: `detection_scores`. Contains the probabilities of the detected bounding boxes.
- Detection box, name: `detection_boxes`. Contains detection box coordinates in the format [`y_min`, `x_min`, `y_max`, `x_max`], where (`x_min`, `y_min`) are the coordinates of the top left corner and (`x_max`, `y_max`) are the coordinates of the bottom right corner. Coordinates are rescaled to the input image size.
- Detections number, name: `num_detections`. Contains the number of predicted detection boxes.
- Segmentation mask, name: `detection_masks`. Contains segmentation heatmaps of detected objects for all classes for every output bounding box (see the post-processing sketch after this list).
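As an illustration, a hedged post-processing sketch is given below; it assumes the outputs have been collected into a dictionary of NumPy arrays keyed by output name, with a leading batch dimension of 1, and the helper name and threshold are illustrative.

```python
# Illustrative filtering of the original model's outputs by confidence score.
def filter_detections(outputs, score_threshold=0.5):
    results = []
    for i in range(int(outputs['num_detections'][0])):
        score = float(outputs['detection_scores'][0, i])
        if score < score_threshold:
            continue
        label = int(outputs['detection_classes'][0, i])
        # Boxes are [y_min, x_min, y_max, x_max], rescaled to the input image size.
        y_min, x_min, y_max, x_max = outputs['detection_boxes'][0, i]
        results.append((label, score, (x_min, y_min, x_max, y_max)))
    return results
```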
Outputs of the converted model:

- The array of summary detection information, name: `reshape_do_2d`, shape: `100, 7` in the format `N, 7`, where `N` is the number of detected bounding boxes. For each detection, the description has the format [`image_id`, `label`, `conf`, `x_min`, `y_min`, `x_max`, `y_max`], where:

    - `image_id` - ID of the image in the batch
    - `label` - predicted class ID
    - `conf` - confidence for the predicted class
    - (`x_min`, `y_min`) - coordinates of the top left bounding box corner (coordinates are stored in normalized format, in the range [0, 1])
    - (`x_max`, `y_max`) - coordinates of the bottom right bounding box corner (coordinates are stored in normalized format, in the range [0, 1])

- Segmentation heatmaps for all classes for every output bounding box, name: `masks`, shape: `100, 90, 33, 33` in the format `N, 90, 33, 33`, where `N` is the number of detected masks and 90 is the number of classes (the background class is excluded). A post-processing sketch that combines these two outputs follows the list.
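The sketch below is one possible way to combine the two outputs, assuming OpenCV and NumPy and that the mask heatmap for a detection is selected by its predicted class; the helper name and both thresholds are illustrative.

```python
# Illustrative post-processing for the converted model: pair the (100, 7)
# detection array with the (100, 90, 33, 33) per-class mask heatmaps.
import cv2
import numpy as np


def postprocess_converted(detections, masks, image_h, image_w,
                          conf_threshold=0.5, mask_threshold=0.5):
    results = []
    for det, mask_set in zip(detections, masks):
        image_id, label, conf, x_min, y_min, x_max, y_max = det
        if conf < conf_threshold:
            continue
        # Normalized [0, 1] coordinates -> absolute pixel coordinates.
        x_min, x_max = int(x_min * image_w), int(x_max * image_w)
        y_min, y_max = int(y_min * image_h), int(y_max * image_h)
        # Class IDs start at 1; the mask axis excludes the background class.
        class_heatmap = mask_set[int(label) - 1]
        box_w, box_h = max(x_max - x_min, 1), max(y_max - y_min, 1)
        binary_mask = cv2.resize(class_heatmap, (box_w, box_h)) > mask_threshold
        results.append((int(label), float(conf),
                        (x_min, y_min, x_max, y_max), binary_mask))
    return results
```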
You can download models and, if necessary, convert them into Inference Engine format using the Model Downloader and other automation tools, as shown in the examples below.

An example of using the Model Downloader:

```sh
python3 <omz_dir>/tools/downloader/downloader.py --name <model_name>
```

An example of using the Model Converter:

```sh
python3 <omz_dir>/tools/downloader/converter.py --name <model_name>
```
The original model is distributed under the Apache License, Version 2.0. A copy of the license is provided in APACHE-2.0-TF-Models.txt.