This repository has been archived by the owner on Jul 26, 2024. It is now read-only.

Object Boundary "Leak" in RGB-D Image and PointCloud #63

Closed
msr-peng opened this issue Aug 24, 2019 · 4 comments
Labels
bug Something isn't working wontfix This will not be worked on

Comments

@msr-peng

The problem still exists in the latest SDK. It has not been resolved.

Describe the bug
There is severe overlap at the object boundaries in the /rgb_to_depth/image_raw topic. You can see my hand below; it looks as if there is another hand close behind my real hand.
[screenshot: misOverlap]
The pointcloud topic also seems to suffer from this boundary problem: the overlap at the boundaries makes objects look larger than they are. What's more, the pointcloud lags by approximately 1 second, which makes real-time applications based on the Azure Kinect ROS Driver difficult.

To Reproduce
My driver.launch parameter configuration:
[screenshot: driverParam]
run

roslaunch Azure_Kinect_ROS_Driver/launch/driver.launch

and then show /rgb_to_depth/image_raw in rqt_image_view

Expected behavior
A registered RGB-D image with correct, sharp boundaries, and real-time pointcloud2 data with low lag (consider the Kinect V1 camera's real-time pointcloud performance in RViz).

@msr-peng msr-peng added bug Something isn't working triage needed The Issue still needs to be reviewed by the Azure Kinect ROS Driver Team labels Aug 24, 2019
@RoseFlunder
Contributor

I already reported the performance issue (lag of colorized point cloud) in #56. So that problem is confirmed and being looked into.

@RoseFlunder
Contributor

I can confirm that I see the overlaps as well in the colorized depth image and the colorized pointcloud.

@skalldri
Contributor

We refer to the phenomenon you're describing as "color spill": color from one object "spills" onto another object in the depth image.

It isn't really possible to resolve this issue on the /rgb_to_depth topic due to the geometry of the camera. This is why you should be using /depth_to_rgb topics when colorizing the point cloud. I provided the /rgb_to_depth topic for completeness, but I'm considering removing it since it isn't really useful. I'll try to explain why this happens.

On the Azure Kinect, like most other RGB-D cameras, the depth and color cameras are physically separate. In the factory, a calibration process captures the precise relative position and orientation of the two cameras. This allows us to transform data captured from one camera into the perspective of the other camera. There are two ways to do this: transforming the image from the depth camera into the RGB camera perspective (/depth_to_rgb) and the inverse, transforming the image from the RGB camera into the perspective of the depth camera (/rgb_to_depth).
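The factory calibration boils down to a rigid transform between the two camera frames. A minimal sketch of applying it, assuming a pure ~3 cm horizontal baseline and identity rotation (the real calibration also includes a small rotation and is queried from the SDK, not hard-coded; all names here are illustrative):

```python
# Hypothetical sketch: transform a 3-D point from the depth camera frame
# into the RGB camera frame via X_rgb = R * X_depth + t. The values below
# are illustrative assumptions, not the device's actual calibration.

def depth_to_rgb_frame(point, rotation, translation):
    """Apply a 3x3 rotation and a 3-vector translation to a 3-D point."""
    x, y, z = point
    return tuple(
        rotation[i][0] * x + rotation[i][1] * y + rotation[i][2] * z + translation[i]
        for i in range(3)
    )

R_identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t_baseline = (0.032, 0.0, 0.0)   # assumed ~3.2 cm baseline along x, in meters

p_depth = (0.0, 0.0, 1.0)        # a point 1 m in front of the depth camera
p_rgb = depth_to_rgb_frame(p_depth, R_identity, t_baseline)
print(p_rgb)                     # the same point in the RGB camera frame
```

With this transform (and each camera's intrinsics) we can re-project pixels captured by one camera into the image plane of the other.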

Because the RGB and depth cameras are separated by ~3 cm, they observe slightly different parts of the scene. Some parts of the scene that are visible to the depth camera are not visible to the RGB camera, and likewise some parts of the scene visible to the RGB camera are not visible to the depth camera. The closer the object is to the camera, the greater the difference is between the observation from the depth camera and the color camera.
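The distance dependence follows from simple parallax: disparity ≈ f · B / Z, so halving the distance doubles how far apart the two cameras see the same point. A rough back-of-the-envelope calculation, using an assumed focal length of 500 px (illustrative, not the Kinect's real value):

```python
# Rough parallax estimate for a ~3 cm baseline: how many pixels apart do
# the two cameras see the same point at various depths? The focal length
# of 500 px is an assumption for illustration.

def disparity_px(focal_px, baseline_m, depth_m):
    return focal_px * baseline_m / depth_m

for depth in (0.5, 1.0, 2.0, 4.0):
    print(f"{depth:.1f} m -> {disparity_px(500, 0.03, depth):.1f} px")
```

At half a metre the offset is tens of pixels, which is why near objects (like a hand in front of the camera) show the worst boundary artifacts.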

So why do we get worse color spill in the /rgb_to_depth topics? Well, consider what the system is doing. It's taking a 2D image with no depth information (the BGRA image from the RGB camera) and re-projecting it into the perspective of the depth camera. The trouble is, the depth camera has observed some parts of the scene that the RGB camera did not, and vice versa. When we try to colorize depth pixels that the RGB camera didn't actually observe, we have no way to know: the RGB camera didn't capture depth information, so we don't actually know where in 3D space the RGB pixel came from. This results in color spill, as RGB pixels are taken from incorrect objects and applied to the depth pixels.
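The mechanism can be shown with a toy 1-D simulation (illustrative geometry, not the SDK's actual math): a near "hand" in front of a far background, viewed by a depth camera and an RGB camera offset by an assumed 6 cm baseline. Colorizing each depth pixel by naively projecting it into the RGB image, with no occlusion test, paints some background pixels next to the hand with the hand's color:

```python
# Toy 1-D pinhole model: pixel u = C + F * x / z. All values are
# illustrative assumptions chosen to make the spill visible.
F, C, W = 100.0, 100, 200          # focal length (px), principal point, width
BASELINE = 0.06                    # RGB camera sits 6 cm to the right
HAND_Z, BG_Z = 0.5, 2.0            # hand at 0.5 m, background at 2 m
HAND_X = (0.0, 0.3)                # hand extent in world x, metres

def is_hand(x):
    return HAND_X[0] <= x <= HAND_X[1]

# Depth image as seen from the depth camera (at x = 0).
depth = []
for u in range(W):
    x_at_hand = HAND_Z * (u - C) / F
    depth.append(HAND_Z if is_hand(x_at_hand) else BG_Z)

# RGB image as seen from the RGB camera (at x = BASELINE); the near hand
# occludes the background wherever it projects.
rgb = []
for u in range(W):
    x_at_hand = BASELINE + HAND_Z * (u - C) / F
    rgb.append("hand" if is_hand(x_at_hand) else "bg")

# Naive rgb_to_depth: back-project each depth pixel to 3-D, reproject it
# into the RGB camera, and copy whatever color happens to be there.
colorized = []
for u, z in enumerate(depth):
    x = z * (u - C) / F
    u_rgb = round(C + F * (x - BASELINE) / z)
    colorized.append(rgb[u_rgb] if 0 <= u_rgb < W else None)

spill = [u for u, z in enumerate(depth) if z == BG_Z and colorized[u] == "hand"]
print("background pixels wrongly colored as hand:", spill)
```

The background pixels just left of the hand pick up the hand's color, because the RGB pixels they project onto actually show the (nearer) hand: exactly the "second hand behind the real hand" effect in the screenshot.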

Now consider the inverse case: where we transform the depth image into the RGB camera perspective. Again, the depth camera has captured some information about the scene that is invisible to the RGB camera, and the RGB camera has captured some information about the scene that is invisible to the depth camera. When we transform the depth image into the RGB co-ordinate space, any RGB pixels that were invisible to the depth camera will be invalidated automatically (since they have a NaN depth value). But what do we do about depth pixels for which we have no color information? Isn't this the same problem?

Well, because the depth image is 3 dimensional, we can "simulate" what a depth image would have looked like from the RGB camera position. This gives us a second set of invalid pixels: the depth pixels that are invisible to the RGB camera. When we invalidate both sets of pixels, (depth pixels that the depth camera did not observe, and depth pixels that would have been invisible if the depth camera was in the position of the RGB camera) we get a nice clean RGB-D image with very little spill.
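The "simulated depth image from the RGB position" is essentially a z-buffer. A minimal 1-D sketch of the idea (illustrative names, not the SDK's API): project every depth sample into the RGB image and keep, per RGB pixel, only the nearest one; samples that lose the z-test were occluded from the RGB camera's viewpoint and get invalidated instead of receiving a wrong color.

```python
# Z-buffer occlusion test: for each RGB pixel, keep only the nearest
# projected depth sample; everything else was occluded from the RGB view.

def zbuffer_project(samples, width):
    """samples: list of (u_rgb, z) depth samples projected into the RGB image."""
    nearest = [None] * width           # per-pixel nearest depth, else None
    for u, z in samples:
        if 0 <= u < width and (nearest[u] is None or z < nearest[u]):
            nearest[u] = z
    return nearest

# Two samples land on the same RGB pixel: the nearer one (the hand, 0.5 m)
# wins, so the 2 m background sample is known to be occluded there.
print(zbuffer_project([(5, 2.0), (5, 0.5)], 10))
```

Invalidating both sets (pixels the depth camera never saw, plus pixels that fail this z-test) is what keeps the /depth_to_rgb output clean.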

@skalldri skalldri added wontfix This will not be worked on and removed triage needed The Issue still needs to be reviewed by the Azure Kinect ROS Driver Team labels Aug 26, 2019
@skalldri
Contributor

On the topic of performance, there are a number of factors to consider here.

There is a known bug that is causing high-resolution colorized point clouds to lag behind real-time: see #56.

In RViz, consider that the Azure Kinect is producing an order of magnitude more data than the Kinect for Xbox One: see the comparison document here for more details. This makes the Azure Kinect much more demanding to visualize in RViz.

The Azure Kinect does more post-processing of the data in the depth engine and on the host CPU compared to the Kinect for Xbox One, so running the Azure Kinect requires more processing on the host PC. I'll look into adding a warning in the node to indicate if the sensor processing loop has fallen behind real-time.
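Such a warning could amount to a per-frame budget check: if handling one frame takes longer than the frame period, frames will queue up and the output lags. A hypothetical sketch (the actual node is C++; all names here are illustrative):

```python
# Hypothetical "fallen behind real-time" check for a capture loop.
# If a frame's handling time exceeds the frame period, count it; a node
# could log a warning whenever this happens.
import time

FRAME_PERIOD = 1.0 / 30.0          # assumed 30 FPS sensor

def process_frames(get_frame, handle, n_frames):
    behind = 0
    for _ in range(n_frames):
        start = time.monotonic()
        handle(get_frame())
        if time.monotonic() - start > FRAME_PERIOD:
            behind += 1            # this frame exceeded its time budget
    return behind

# A handler that takes 50 ms can never keep up with a ~33 ms frame budget.
slow = process_frames(lambda: None, lambda f: time.sleep(0.05), 3)
print(slow)
```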
