Created example with DOPE and Diff-DOPE integration through ROS actionlib. #7
Conversation
The Scene dataclass used to rely on the path attributes to perform operations on the tensors; this seems unnecessary when the tensors aren't None. Removing that dependency would allow one to set the tensors directly without having to provide paths to files (say, someone using ROS who reads images from a topic message).
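A minimal sketch of the idea, not the actual Diff-DOPE code: the field names `image_path` and `tensor_rgb` are assumptions, and numpy stands in for the real torch tensors. The point is that the accessor prefers a directly-set tensor and only falls back to the path, so a ROS node can fill tensors from topic messages.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Scene:
    # Hypothetical field names; the real dataclass has more attributes.
    image_path: Optional[str] = None
    tensor_rgb: Optional[np.ndarray] = None  # e.g. filled from a ROS Image msg

    def rgb(self) -> np.ndarray:
        # Prefer the tensor when it is already set...
        if self.tensor_rgb is not None:
            return self.tensor_rgb
        # ...and only fall back to loading from disk when it is not.
        if self.image_path is not None:
            raise NotImplementedError("load from image_path as the current code does")
        raise ValueError("Scene needs either tensor_rgb or image_path")
```

With this shape, a node reading images from a topic never has to touch the filesystem.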
…structions of how to run the demos.
transform[0, 3] /= 10
transform[1, 3] /= 10
transform[2, 3] /= 10
One small issue that is currently hard-coded: I observed that the output of Diff-DOPE was off by a fixed factor. Here, I divide the output by 10 to correct it. I understand this is not ideal and ugly, but I am not sure where this error comes from. It is strange because Diff-DOPE seems to expect the initial position estimate in millimetres, yet the output seems closer to metres. The 3D model is scaled by 10 to match the scale of your provided example, but I don't understand how this would affect the output.
Yeah, I had some issues in there, and these are somehow hard to pin point, thank you for this. Maybe we should make this a hyperparameter, as the 3D model will have an impact on the representation. I have a vague recollection that I needed this, but did it just to match my older code.
sensor). It also uses a camera info topic to retrieve camera intrinsic
parameters as well as the image dimensions.

You can also download a ROS bag from `here <https://leeds365-my.sharepoint.com/:u:/g/personal/scsrp_leeds_ac_uk/Ec-TbyOr1QVIt6NQQP7E4pABkEUmaEGByVjLHugY7Als_A?e=JES96n>`_
Please consider replacing this link with yours, as this link will expire on the 5th of May.
Will do. I am on vacation atm, but I will test this out when I am back.
ddope.make_animation(output_file_path=video_path)
rospy.loginfo(f"Video saved at {video_path}")

def __convert_opengl_pose_to_opencv(self, transform):
Diff-DOPE's output seems to be in the OpenGL coordinate system, so this applies a rotation of 180 degrees around the x-axis to bring it into the OpenCV coordinate system used by DOPE.
diff-dope is in opengl, nvdiffrast is in opengl. So this is good.
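For reference, the conversion discussed above can be sketched as follows, assuming a 4x4 homogeneous transform; the function and constant names are illustrative, not the PR's private method. A 180-degree rotation about the camera x-axis negates the y and z axes, mapping OpenGL's camera convention (y up, z toward the viewer) onto OpenCV's (y down, z forward).

```python
import numpy as np

# 180-degree rotation about x: negates the y and z rows.
FLIP_YZ = np.diag([1.0, -1.0, -1.0, 1.0])

def opengl_to_opencv(transform: np.ndarray) -> np.ndarray:
    # Left-multiplying re-expresses the pose in the flipped camera frame,
    # so both the rotation and the translation are converted at once.
    return FLIP_YZ @ transform
```

Applying it twice returns the original pose, which is a handy sanity check for round-tripping between the two conventions.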
position = np.array([dope_position.x, dope_position.y, dope_position.z])

# Convert to mm
position *= 1_000
From your provided example, it seems that Diff-DOPE expects the input position in millimetres; here I convert DOPE's reading from metres to millimetres, if my assumption is correct.
:D
@TontonTremblay a gentle reminder about this PR.
Ok, I am looking at it today. Could you add instructions on how to run the code with DOPE to the readme, or make another readme in a folder like ros or something like that? I am not sure where to start to run an example atm.
Ok, I see this docs/demo. Could you add a reference to it in the readme?
Maybe diffdope_ros should be its own folder instead of example? What do you think?
Maybe in the doc, add a step where you would run the DOPE model?
#7 (comment)
Hi, it should be server.launch, not .py. Sorry, I'm on mobile and can't double-check, but I think it's .launch, not .py.
I was able to run the code all the way to refining bbq_sauce. I would say the doc is great, but I think we need a little more step-by-step work. Also, I don't know where the refined pose gets published.
Could the code say something about waiting for an image topic or other topics it cannot find?
The refine.py will subscribe to the RGB, depth and DOPE pose topics. It will then make an actionlib request to refine the pose given an RGB and depth frame and the DOPE pose. The server responds with the refined pose. There is no topic in this example for the refined pose.
Ok, I think we would need some way to republish the pose, no? Also, if something is not loading, it would be nice if it said so.
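On the "say something if a topic is missing" point: in a ROS1 node one would typically use `rospy.wait_for_message(topic, msg_type, timeout=...)` and log on timeout. A generic, ROS-free sketch of that pattern (the helper name and signature are illustrative, not from the PR):

```python
import time

def wait_until(received, name, timeout=5.0, poll=0.1, log=print):
    """Poll `received()` until it reports the topic has published,
    logging while waiting so silent failures become visible."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if received():
            return True
        log(f"[diffdope_ros] waiting for {name} ...")
        time.sleep(poll)
    log(f"[diffdope_ros] gave up waiting for {name} after {timeout}s")
    return False
```

The server could call this once per configured topic at startup and refuse to start refining until everything it needs is actually being published.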
@TontonTremblay pushed some more commits addressing some of your comments. I will add checks in
…ontinuous tracking.
Hi @TontonTremblay, I think I have addressed all the comments.
Please let me know what you think.
server.py -> server.launch
This is what I have at the moment: no success or failure message, and it does not seem to be running. Thank you for the continuous tracking :P I like it. I feel like we are getting close. Probably just some clarifications we can add to the doc and I think we are up and running.
Sorry, a combination of work and travelling put that to the side. The server shows the progress of the refinement. If you don't want certain objects tracked (i.e. the pose topic isn't available), you need to remove them from the config file. Strange, my version works with object_name:=bbq_sauce and it spits out a pose after the refinement.
Yeah, no worries! Work and life happen. I really appreciate all the effort you put in tbh and I am quite impressed.
Thanks, Jonathan! I encountered the issue with killing too, just today. I will fix that. I will also try to update the docs to be more helpful. Is there anything else you want to see in this before it gets merged? Rafael
Hello,
I created a demo that uses DOPE and Diff-DOPE through ROS and an actionlib server/client architecture. This PR brings the following:

- refine.py is the client which makes calls to the server.py; refine.py can send a request to refine a single object's pose or to refine all objects in the scene.
- The refine node subscribes to topics for RGB and depth, as well as the topics that publish the object pose from DOPE (it can subscribe to the pose of more than one object). All are defined in the config file.
- The server will segment the image on-the-fly (using the DOPE position and segment-anything), set up a Diff-DOPE Scene and Object3D based on the information derived from the ROS topics (RGB frame, depth frame, segmentation, DOPE pose), run the optimisation, and return the refined pose (converted to OpenCV from OpenGL).

Please let me know if you have any questions or if you have any suggestions for improvement.
Thank you,
Rafael