Created example with DOPE and Diff-DOPE integration through ROS actionlib. #7

Open
wants to merge 47 commits into base: main
Changes from 39 commits
3a6d6f2
Created a config file for demo with multi-object with ROS and DOPE
rpapallas Jan 18, 2024
457dc44
Modified Scene dataclass to allow memory image representation
rpapallas Jan 18, 2024
053f252
Updated model paths and added rgb/depth topics.
rpapallas Jan 18, 2024
55e1d0a
Corrected typo
rpapallas Jan 18, 2024
0a465bf
Minor changes to yaml file
rpapallas Jan 18, 2024
b361465
Added some HOPE object models as examples.
rpapallas Jan 18, 2024
deb545f
Added .mp4 in gitignore
rpapallas Jan 18, 2024
e7ccf35
Implemented first version of ROS+DOPE demo with multi-object tracking.
rpapallas Jan 18, 2024
e77f4eb
Ignore _build directory in docs
rpapallas Jan 18, 2024
66d318c
Created a new docs page for demos
rpapallas Jan 18, 2024
eeea44d
Added URL to the ROS bag.
rpapallas Jan 19, 2024
c36572b
Added readme to demos to help people realise that the docs include in…
rpapallas Jan 19, 2024
e6fe8f3
Converted the demo into a catkin package
rpapallas Jan 19, 2024
6d08c96
Updated msgs
rpapallas Jan 19, 2024
0b335f3
Implemented first version of actionlib server
rpapallas Jan 19, 2024
40fa711
Implemented a class to support segmentation based on segment-anything
rpapallas Jan 19, 2024
4406f5a
Reformatting based on pre-commit rules
rpapallas Jan 19, 2024
3eb186c
Added some segmentation test images
rpapallas Jan 19, 2024
195f74d
Renamed segmentation class
rpapallas Jan 19, 2024
239bc07
Added script to build and zip website for deployment
rpapallas Jan 19, 2024
a699d7f
Updated meshes of HOPE objects.
rpapallas Jan 22, 2024
e932625
Implemented actionlib demo
rpapallas Jan 22, 2024
aadfe1f
Updated the mesh for bbq sauce
rpapallas Jan 22, 2024
ddc0dd7
Updated meshes for other HOPE objects.
rpapallas Jan 22, 2024
3dbd21f
Camera intrinsic params dynamically retrieved from ROS topic.
rpapallas Jan 22, 2024
fdffe0e
General refactoring
rpapallas Jan 22, 2024
b7bdf99
Updated docs to include instructions of how to run the demo and how t…
rpapallas Jan 22, 2024
a9cd963
Updated the roslaunch commands for refine client
rpapallas Jan 23, 2024
463c8cb
Updated model's frame orientation to match DOPE
rpapallas Mar 7, 2024
56a8248
Update demos.rst
rpapallas Mar 7, 2024
494c12c
Minor change.
rpapallas Apr 3, 2024
b2e6901
Reverted models back to original format and instead added transformat…
rpapallas Apr 3, 2024
fd5c2c4
Minor changes.
rpapallas Apr 3, 2024
c709929
Updated models.
rpapallas Apr 5, 2024
51c84cb
Fixed pose output issue
rpapallas Apr 5, 2024
7495914
Merge branch 'ros-example' of github.com:rpapallas/diff-dope into ros…
rpapallas Apr 5, 2024
a033fb1
Removed unused libraries.
rpapallas Apr 5, 2024
08d465a
Updated docs.
rpapallas Apr 5, 2024
8520f05
Removed old model of orange_juice and added object textures
rpapallas Apr 5, 2024
a3198fc
Added ROS example in README.
rpapallas Apr 27, 2024
48bd007
Moved diffdope_ros to root dir.
rpapallas Apr 27, 2024
4eeeceb
Updated docs to include step to run DOPE.
rpapallas Apr 27, 2024
8aaa647
Made rgb and depth callbacks atomic and not one per object.
rpapallas Apr 30, 2024
6a8f526
Added check to ensure all data available before refining.
rpapallas Apr 30, 2024
13a41fd
Added limit of saved videos per run to avoid saving too many during c…
rpapallas Apr 30, 2024
6c335fe
Implemented continuous tracking and publishing of refined pose to a t…
rpapallas Apr 30, 2024
4b7d46f
Updated docs to include details about continuous tracking launch file
rpapallas Apr 30, 2024
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -6,3 +6,4 @@ docs/_buid/
*.png
*.jpg
*.jpeg
*.mp4
41 changes: 22 additions & 19 deletions diffdope/diffdope.py
@@ -6,10 +6,10 @@

import collections
import io
import sys
import math
import pathlib
import random
import sys
import warnings
from dataclasses import dataclass
from itertools import repeat
@@ -36,7 +36,7 @@

# for better print debug
print()
if not hasattr(sys, 'ps1'):
if not hasattr(sys, "ps1"):
print = ic

# A logger for this file
@@ -209,9 +209,14 @@ def render_texture_batch(
depth = depth.reshape(shape_keep)[..., 2] * -1

# mask , _ = dr.interpolate(torch.ones(pos_idx.shape).cuda(), rast_out, pos_idx)
mask, _ = dr.interpolate(torch.ones(pos_idx.shape).cuda(),
rast_out, pos_idx[0],rast_db=rast_out_db,diff_attrs="all")
mask = dr.antialias(mask, rast_out, pos_clip_ja, pos_idx[0])
mask, _ = dr.interpolate(
torch.ones(pos_idx.shape).cuda(),
rast_out,
pos_idx[0],
rast_db=rast_out_db,
diff_attrs="all",
)
mask = dr.antialias(mask, rast_out, pos_clip_ja, pos_idx[0])

# compute vertex color interpolation
if vtx_color is None:
@@ -237,7 +242,7 @@ def render_texture_batch(
color = color * torch.clamp(rast_out[..., -1:], 0, 1) # Mask out background.
if not return_rast_out:
rast_out = None
return {"rgb": color, "depth": depth, "rast_out": rast_out, 'mask':mask}
return {"rgb": color, "depth": depth, "rast_out": rast_out, "mask": mask}


##############################################################################
@@ -973,7 +978,7 @@ def __init__(
self.qx = None # to load on cpu and not gpu

if model_path is None:
self.mesh = None
self.mesh = None
else:
self.mesh = Mesh(path_model=model_path, scale=scale)

@@ -1227,11 +1232,11 @@ def set_batchsize(self, batchsize):
Args:
batchsize (int): batchsize for the tensors
"""
if not self.path_img is None:
if not self.tensor_rgb is None:
self.tensor_rgb.set_batchsize(batchsize)
if not self.path_depth is None:
if not self.tensor_depth is None:
self.tensor_depth.set_batchsize(batchsize)
if not self.path_segmentation is None:
if not self.tensor_segmentation is None:
self.tensor_segmentation.set_batchsize(batchsize)

def get_resolution(self):
@@ -1241,17 +1246,17 @@ def get_resolution(self):
Return
(list): w,h of the image for optimization
"""
if not self.path_img is None:
if not self.tensor_rgb is None:
return [
self.tensor_rgb.img_tensor.shape[-3],
self.tensor_rgb.img_tensor.shape[-2],
]
if not self.path_depth is None:
if not self.tensor_depth is None:
return [
self.tensor_depth.img_tensor.shape[-2],
self.tensor_depth.img_tensor.shape[-1],
]
if not self.path_segmentation is None:
if not self.tensor_segmentation is None:
return [
self.tensor_segmentation.img_tensor.shape[-3],
self.tensor_segmentation.img_tensor.shape[-2],
@@ -1262,11 +1267,11 @@ def cuda(self):
Put on cuda the image tensors
"""

if not self.path_img is None:
if not self.tensor_rgb is None:
self.tensor_rgb.cuda()
if not self.path_depth is None:
if not self.tensor_depth is None:
self.tensor_depth.cuda()
if not self.path_segmentation is None:
if not self.tensor_segmentation is None:
self.tensor_segmentation.cuda()


@@ -1424,7 +1429,7 @@ def render_img(

else:
crop = find_crop(self.optimization_results[index][render_selection][0])

if batch_index is None:
# make a grid
if self.cfg.render_images.crop_around_mask:
@@ -1656,7 +1661,6 @@ def run_optimization(self):
if self.scene.tensor_segmentation is not None:
self.gt_tensors["segmentation"] = self.scene.tensor_segmentation.img_tensor


pbar = tqdm(range(self.cfg.hyperparameters.nb_iterations + 1))

for iteration_now in pbar:
@@ -1728,4 +1732,3 @@ def cuda(self):
self.object3d.cuda()
self.scene.cuda()
self.camera.cuda()
pass
1 change: 1 addition & 0 deletions docs/.gitignore
@@ -0,0 +1 @@
_build/
16 changes: 10 additions & 6 deletions docs/conf.py
@@ -6,8 +6,6 @@
# -- Project information -----------------------------------------------------
# https://www.sphinx-doc.org/en/master/usage/configuration.html#project-information

import os
import sys
import os

# Ensure Sphinx uses the correct Python interpreter
@@ -43,20 +41,26 @@
html_theme = "sphinx_rtd_theme"
# html_static_path = ["_static"]


# In your conf.py file
def autodoc_skip_member(app, what, name, obj, skip, options):
# Ref: https://stackoverflow.com/a/21449475/
exclusions = ('make_grid', # special-members
'__doc__', '__module__', '__dict__', # undoc-members
)
exclusions = (
"make_grid", # special-members
"__doc__",
"__module__",
"__dict__", # undoc-members
)
exclude = name in exclusions
# return True if (skip or exclude) else None # Can interfere with subsequent skip functions.
return True if exclude else None


def setup(app):
app.connect("autodoc-skip-member", autodoc_skip_member)


import sys

print("Python Version:")
print(sys.version)
print(sys.version)
99 changes: 99 additions & 0 deletions docs/demos.rst
@@ -0,0 +1,99 @@
Demos
================

Multi-object using ROS and DOPE with RGB and Depth
--------------------------------------------------

The demo uses RGB and Depth from a RealSense camera, poses published by DOPE,
and segments the image using
`segment-anything <https://github.com/facebookresearch/segment-anything>`_.
The demo comes as a catkin package under ``examples/``.

Suppose you cloned the diff-dope repo under your home directory (e.g., ``~/diff-dope``)
and you also have a catkin workspace there (e.g., ``~/catkin_ws``). You can then
create a symlink of the package in your workspace:

.. code::

cd ~/catkin_ws/src
ln -s ~/diff-dope/examples/diffdope_ros .

You can now run ``catkin_make`` under ``~/catkin_ws`` to build the package.
You can, of course, move the package there instead of creating a symlink.
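The build step above can be sketched as follows; this is a minimal sketch assuming a ROS 1 (catkin) installation whose ``/opt/ros/<distro>/setup.bash`` has already been sourced:

```shell
# Build the workspace after linking the package.
cd ~/catkin_ws
catkin_make

# Overlay the workspace so roslaunch/rosrun can find diffdope_ros.
source devel/setup.bash
```

Remember to source ``devel/setup.bash`` in every new terminal (or add it to your shell profile) before running the demo commands below.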

The demo reads its configuration from
``diffdope_ros/config/multiobject_with_dope.yaml``, which selects DOPE for
initial pose estimation and specifies the topics for RGB and Depth (assuming a
RealSense sensor). It also specifies a camera info topic from which the camera
intrinsic parameters and image dimensions are retrieved.
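As a rough illustration only, such a configuration couples topic names with per-object settings along these lines. The keys and values below are assumptions for the sketch, not the actual schema; inspect ``config/multiobject_with_dope.yaml`` for the real one:

```yaml
# Hypothetical sketch -- see config/multiobject_with_dope.yaml for the real schema.
topics:
  rgb: /camera/color/image_raw
  depth: /camera/aligned_depth_to_color/image_raw
  camera_info: /camera/color/camera_info

objects:
  - name: bbq_sauce
    model_path: models/bbq_sauce/textured.obj
    dope_topic: /dope/pose_bbq_sauce
```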

You can also download a ROS bag from `here <https://leeds365-my.sharepoint.com/:u:/g/personal/scsrp_leeds_ac_uk/Ec-TbyOr1QVIt6NQQP7E4pABkEUmaEGByVjLHugY7Als_A?e=JES96n>`_
to play with this demo, without the need for a real sensor.

Author: Please consider replacing this link with yours, as this link will expire on the 5th of May.

Collaborator: Will do. I am on vacation at the moment, but I will test this out when I am back.

If you wish to use the ROS bag, you can play it back in a loop like so:

.. code::

rosbag play -l ~/path/to/simple.bag


You need to install
`segment-anything <https://github.com/facebookresearch/segment-anything>`_
and download the weights of the model.

.. code::

pip install git+https://github.com/facebookresearch/segment-anything.git

Then head to the
`model checkpoints section <https://github.com/facebookresearch/segment-anything?tab=readme-ov-file#model-checkpoints>`_,
and download any of the model checkpoints. Place it somewhere on your
computer and update the ``segment_anything_checkpoint_path`` of the config file
to let it know where to find it. If you want to test the segmentation
functionality independently, you can run the following Python script:
``diffdope_ros/scripts/segmentator.py``.

From there you can run the demo like so:

.. code::

    roslaunch diffdope_ros server.launch # To start the actionlib server

# Refine pose for individual object
roslaunch diffdope_ros refine.launch object_name:=bbq_sauce
roslaunch diffdope_ros refine.launch object_name:=alphabet_soup
roslaunch diffdope_ros refine.launch object_name:=mustard

# or don't pass object_name to refine the pose of all the objects in the config
roslaunch diffdope_ros refine.launch

The above names are derived from the config file, ``config/multiobject_with_dope.yaml``.
The launch files pass this file to the server and refine scripts.

Parameters
************************

Please inspect ``config/multiobject_with_dope.yaml`` to see the available
parameters. For example, by default this demo produces videos of the
optimisation; you can turn this off through the config file to speed things up.
You can also adjust certain optimisation parameters from the config file.

Dealing with DOPE and model coordinate frames
*********************************************

Please note the following important details when you try to use a new object
and pose from DOPE:

* The DOPE pose output may not match the coordinate frame used in the 3D model
  of the object you wish to use. In this case, you need to apply a static
  transformation to bring the DOPE pose output into the frame used by your 3D
  model. DOPE's config file provides a way (``model_transforms``) to define such
  a transformation per object. For more details on this subject, please read
  `this <https://github.com/NVlabs/Deep_Object_Pose/issues/346>`_.
* The scaling of the object is important. We suggest that you scale your 3D
  object in Blender to bring it closer to the scale of the examples. For
  example, we had to scale the HOPE objects, as downloaded from the official
  repository, by a factor of 10. Although a config parameter to scale the 3D
  object is available, we had difficulty getting it to work properly and had
  better luck manually scaling the 3D object in Blender. You can import a
  reference object (like the BBQ Sauce model we provide) into Blender to
  compare the scale.
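To illustrate the first point above, here is a minimal sketch of applying a static correction to a DOPE pose represented as a 4x4 homogeneous matrix. The specific rotation and translation values are hypothetical; the real per-object correction comes from the ``model_transforms`` entry in your DOPE config:

```python
import numpy as np

def apply_static_transform(dope_pose, static_transform):
    """Right-multiply a 4x4 DOPE pose by a static model-frame correction."""
    return dope_pose @ static_transform

# Hypothetical example: the model frame is rotated 90 degrees about X
# relative to the frame DOPE reports.
angle = np.pi / 2
rot_x = np.array([
    [1, 0.0,            0.0,           0],
    [0, np.cos(angle), -np.sin(angle), 0],
    [0, np.sin(angle),  np.cos(angle), 0],
    [0, 0.0,            0.0,           1],
])

# A DOPE pose: identity rotation, object 0.5 m in front of the camera.
dope_pose = np.eye(4)
dope_pose[:3, 3] = [0.1, 0.0, 0.5]

corrected = apply_static_transform(dope_pose, rot_x)
```

Because the correction is applied on the right (in the model frame) and has no translation component, the object's position in the camera frame is unchanged; only the orientation is remapped.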
1 change: 1 addition & 0 deletions docs/index.rst
@@ -11,6 +11,7 @@ Welcome to Diff-DOPE's documentation!
:caption: Contents:

modules
demos

Indices and tables
==================