Updated READMEs
nv-jeff committed Mar 19, 2024
1 parent 6c1f49f commit b71f271
Showing 7 changed files with 29 additions and 144 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -1,4 +1,4 @@
/weights
*.pth
*.pyc
*._
__pycache__
3 changes: 2 additions & 1 deletion evaluate/readme.md
@@ -66,7 +66,8 @@ Only `--bop` needs to be passed to load a BOP scene. You can pass which scen

We assume that the intrinsics are stored in the camera data. If they are not, the script falls back to a 512 x 512 image with a FOV of 0.78. If the camera data is complete, as with NViSII data, the script uses the stored camera intrinsics.
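
To illustrate the fallback described above, here is a minimal sketch of how pinhole intrinsics can be derived from an image size and field of view. The function name and the square-pixel assumption are ours for illustration; this is not the repository's actual code.

```
import math

def fallback_intrinsics(width=512, height=512, fov=0.78):
    # Approximate pinhole intrinsics from image size and FOV (radians).
    # Mirrors the documented fallback (512 x 512, FOV 0.78) used when no
    # intrinsics are stored with the camera data. Illustrative sketch only.
    fx = width / (2.0 * math.tan(fov / 2.0))  # focal length in pixels
    fy = fx                                   # square pixels assumed
    cx, cy = width / 2.0, height / 2.0        # principal point at image center
    return fx, fy, cx, cy
```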

<!--
# TODO
- Make a `requirements.txt` file.
- Possibly subsample vertices so computation is faster (see the sketch after this TODO block)
<!-- - make a script to visualize the json files from DOPE -->
- make a script to visualize the json files from DOPE -->
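
Regarding the vertex-subsampling note in the TODO above: the ADD metric averages point distances over all model vertices, so subsampling them directly speeds up evaluation. Below is a minimal sketch of ADD with optional subsampling, an assumed illustration rather than the repository's implementation.

```
import numpy as np

def add_metric(vertices, R_gt, t_gt, R_est, t_est, subsample=None):
    # vertices: (N, 3) model points; R_*: (3, 3) rotations; t_*: (3,) translations.
    if subsample is not None and subsample < len(vertices):
        # Randomly subsample model vertices to speed up the computation.
        idx = np.random.choice(len(vertices), subsample, replace=False)
        vertices = vertices[idx]
    pts_gt = vertices @ R_gt.T + t_gt     # points under the ground-truth pose
    pts_est = vertices @ R_est.T + t_est  # points under the estimated pose
    # ADD: mean Euclidean distance between corresponding transformed points.
    return np.linalg.norm(pts_gt - pts_est, axis=1).mean()
```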
2 changes: 1 addition & 1 deletion inference/README.md
@@ -22,7 +22,7 @@ The `inference.py` script will take a trained model to run inference. In order t
Below is an example of running inference:

```
python inference.py --weights ../output/weights --data ../sample_data --object cracker
python inference.py --weights ../weights --data ../sample_data --object cracker
```

### Configuration Files
69 changes: 19 additions & 50 deletions readme.md
@@ -1,79 +1,48 @@
[![License CC BY-NC-SA 4.0](https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-blue.svg)](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)
![Python 3.8](https://img.shields.io/badge/python-3.8-blue.svg)
# Deep Object Pose Estimation - ROS Inference
# Deep Object Pose Estimation

This is the official DOPE ROS package for detection and 6-DoF pose estimation of **known objects** from an RGB camera. The network has been trained on the following YCB objects: cracker box, sugar box, tomato soup can, mustard bottle, potted meat can, and gelatin box. For more details, see our [CoRL 2018 paper](https://arxiv.org/abs/1809.10790) and [video](https://youtu.be/yVGViBqWtBI).
This is the official DOPE ROS package for detection and 6-DoF pose estimation of **known objects** from an RGB camera. For full details, see our [CoRL 2018 paper](https://arxiv.org/abs/1809.10790) and [video](https://youtu.be/yVGViBqWtBI).


![DOPE Objects](dope_objects.png)

## Updates

2024/03/07 - New training code. New synthetic data generation code using BlenderProc. Repo reorganization.
## Contents
This repository contains complete code for [training](train), [inference](inference), numerical [evaluation](evaluate) of results, and synthetic [data generation](data_generation) using either [NVISII](https://github.com/owl-project/NVISII) or [BlenderProc](https://github.com/DLR-RM/BlenderProc). We also provide a [ROS1 Noetic package](ros1) that performs inference on images from a USB camera.

2022/07/13 - Added a script with a simple example for computing the ADD and ADD-S metric on data. Please refer to [script/metrics/](https://github.com/NVlabs/Deep_Object_Pose/tree/master/scripts/metrics).
Hardware-accelerated ROS2 inference can be done with the
[Isaac ROS DOPE](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation/tree/main/isaac_ros_dope) project.

2022/03/30 - Update on the NViSII script to handle [symmetrical objects](https://github.com/NVlabs/Deep_Object_Pose/tree/master/scripts/nvisii_data_gen#handling-objects-with-symmetries). Also the NViSII script is compatible with the original training script. Thanks to Martin Günther.

2021/12/13 - Added a NViSII script to generate synthetic data for training DOPE. See this [readme](https://github.com/NVlabs/Deep_Object_Pose/tree/master/scripts/nvisii_data_gen) for more details. We also added updated training and inference (without ROS) scripts for the NViSII paper [here](https://github.com/NVlabs/Deep_Object_Pose/tree/master/scripts/train2).

2021/10/20 - Added ROS2 Foxy inference support through [Isaac ROS DOPE package](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation) for Jetson and x86+CUDA-capable GPU.

2021/08/07 - Added publishing belief maps. Thank you to Martin Günther.

2020/03/09 - Added HOPE [weights to Google Drive](https://drive.google.com/open?id=1DfoA3m_Bm0fW8tOWXGVxi4ETlLEAgmcg), [the 3D models](https://drive.google.com/drive/folders/1jiJS9KgcYAkfb8KJPp5MRlB0P11BStft), and the object dimensions to config. [Tremblay et al., IROS 2020](https://arxiv.org/abs/2008.11822). The HOPE dataset can be found [here](https://github.com/swtyree/hope-dataset/) and is also part of the [BOP challenge](https://bop.felk.cvut.cz/datasets/#HOPE).


<br>
<br>

## Tested Configurations

We have tested on Ubuntu 20.04 with ROS Noetic with an NVIDIA Titan X and RTX 2080ti with Python 3.8. The code may work on other systems.
We have tested our standalone training, inference, and evaluation scripts on Ubuntu 20.04 and 22.04 with Python 3.8+, using an NVIDIA Titan X, 2080Ti, and Titan RTX.

---
***NOTE***

For hardware-accelerated ROS2 inference support, please visit [Isaac ROS DOPE](https://github.com/NVIDIA-ISAAC-ROS/isaac_ros_pose_estimation/tree/main/isaac_ros_dope) which has been tested with ROS2 Foxy on Jetson AGX Xavier/JetPack 4.6 and on x86/Ubuntu 20.04 with RTX3060i.
The ROS1 node has been tested with ROS Noetic using Python 3.10. The Isaac ROS2 DOPE node has been tested with ROS2 Foxy on Jetson AGX Xavier with JetPack 4.6, and on x86/Ubuntu 20.04 with an NVIDIA Titan X, 2080Ti, and Titan RTX.

---
<br>
<br>

## Synthetic Data Generation
Code and instructions for generating synthetic training data are found in the `data_generation` directory. There are two options for the render engine: [NVISII](https://github.com/owl-project/NVISII) or [BlenderProc](https://github.com/DLR-RM/BlenderProc).
## Datasets

## Training
Code and instructions for training DOPE are found in the `train` directory.
We have trained and tested DOPE with two publicly available datasets: YCB and HOPE. The trained weights can be [downloaded from Google Drive](https://drive.google.com/drive/folders/1DfoA3m_Bm0fW8tOWXGVxi4ETlLEAgmcg).

## Inference
Code and instructions for command-line inference using PyTorch are found in the `inference` directory.

## Evaluation
Code and instructions for evaluating the quality of your results are found in the `evaluate` directory.

---
### YCB 3D Models
YCB models can be downloaded from the [YCB website](http://www.ycbbenchmarks.com/), or by using [NVDU](https://github.com/NVIDIA/Dataset_Utilities) (see the `nvdu_ycb` command).

## YCB 3D Models

DOPE returns the poses of the objects in the camera coordinate frame. DOPE uses the aligned YCB models, which can be obtained using [NVDU](https://github.com/NVIDIA/Dataset_Utilities) (see the `nvdu_ycb` command).
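
Since DOPE reports poses in the camera coordinate frame, a common follow-up step is composing them with a known camera-to-world transform. The sketch below shows that composition; the helper function, the (x, y, z, w) quaternion convention, and the example values are assumptions for illustration, not code from this repository.

```
import numpy as np
from scipy.spatial.transform import Rotation as R

def pose_to_matrix(quat_xyzw, t):
    # Build a 4x4 homogeneous transform from a quaternion and a translation.
    T = np.eye(4)
    T[:3, :3] = R.from_quat(quat_xyzw).as_matrix()
    T[:3, 3] = t
    return T

# Object pose in the camera frame (as DOPE reports it), and the camera's
# pose in the world frame (from your own calibration; assumed known).
T_cam_obj = pose_to_matrix([0.0, 0.0, 0.0, 1.0], [0.1, 0.0, 0.5])
T_world_cam = pose_to_matrix([0.0, 0.0, 0.0, 1.0], [0.0, 0.0, 1.0])

# Compose to express the object pose in the world frame.
T_world_obj = T_world_cam @ T_cam_obj
```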
### HOPE 3D Models
The [HOPE dataset](https://github.com/swtyree/hope-dataset/) is a collection of RGBD images and video sequences with labeled 6-DoF poses for 28 toy grocery objects. The 3D models [can be downloaded here](https://drive.google.com/drive/folders/1jiJS9KgcYAkfb8KJPp5MRlB0P11BStft).
The folders are organized in the style of the YCB 3D models.

---
The physical objects can be purchased online (details and links to Amazon can be found in the [HOPE repository README](https://github.com/swtyree/hope-dataset/)).

## HOPE 3D Models
<br><br>

![HOPE 3D models rendered in UE4](https://i.imgur.com/V6wX64p.png)
---

We introduce new toy 3D models that you can download [here](https://drive.google.com/drive/folders/1jiJS9KgcYAkfb8KJPp5MRlB0P11BStft).
The folders are arranged like the YCB 3D models.
You can buy the real objects using the following links:
[set 1](https://www.amazon.com/gp/product/B071ZMT9S2),
[set 2](https://www.amazon.com/gp/product/B007EA6PKS),
[set 3](https://www.amazon.com/gp/product/B00H4SKSPS),
and
[set 4](https://www.amazon.com/gp/product/B072M2PGX9).

The HOPE dataset can be found [here](https://github.com/swtyree/hope-dataset/) and is also part of the [BOP challenge](https://bop.felk.cvut.cz/datasets/#HOPE).

## How to cite DOPE

@@ -90,7 +59,7 @@ If you use this tool in a research project, please cite as follows:

## License

Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
Copyright (C) 2018-2024 NVIDIA Corporation. All rights reserved. This code is licensed under the [NVIDIA Source Code License](https://github.com/NVlabs/HANDAL/blob/main/LICENSE.txt).


## Acknowledgment
7 changes: 2 additions & 5 deletions train/README.md
@@ -1,6 +1,6 @@
# Deep Object Pose Estimation (DOPE) - Training

This repo contains a simplified version of the **training** script for DOPE.
This repo contains a simplified version of the training pipeline for DOPE.
Scripts for inference, evaluation, and data visualization can be found in this repo's top-level directories `inference` and `evaluate`.

A user report of training DOPE on a single GPU using NVISII-created synthetic data can [be found here](https://github.com/NVlabs/Deep_Object_Pose/issues/155#issuecomment-791148200).
@@ -18,7 +18,7 @@ source ./output/dope_training/bin/activate
---
To install the required dependencies, run:
```
pip install -r requirements.txt
pip install -r ../requirements.txt
```

## Training
@@ -65,6 +65,3 @@ python debug.py --data PATH_TO_IMAGES
2. If you are running into dependency issues when installing,
you can try installing the version-specific dependencies that are commented out in `requirements.txt`. Be sure to do this in a virtual environment.

## License

Copyright (C) 2018 NVIDIA Corporation. All rights reserved. Licensed under the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).
85 changes: 0 additions & 85 deletions train/run_pipeline_on_ngc.py

This file was deleted.

5 changes: 4 additions & 1 deletion weights/readme.md
@@ -1 +1,4 @@
This is where you need to store the weights.
We have trained and tested DOPE with two publicly available datasets: YCB and HOPE. These trained weights can be
[downloaded from Google Drive](https://drive.google.com/drive/folders/1DfoA3m_Bm0fW8tOWXGVxi4ETlLEAgmcg).

