Object Boundary "Leak" in RGB-D Image and PointCloud #63
I already reported the performance issue (lag of the colorized point cloud) in #56, so that problem is confirmed and being looked into.
I can confirm that I see the overlaps as well in the colorized depth image and the colorized pointcloud.
The phenomenon you're describing is what we refer to as "color spill", where color from one object "spills" onto another object in the depth image. It isn't really possible to fully resolve this issue in the `rgb_to_depth` image.

On the Azure Kinect, like most other RGB-D cameras, the depth and color cameras are physically separate. In the factory, a calibration process captures the precise relative position and orientation of the two cameras. This allows us to transform data captured from one camera into the perspective of the other camera. There are two ways to do this: transforming the image from the depth camera into the RGB camera perspective (the `depth_to_rgb` topics), or transforming the image from the RGB camera into the depth camera perspective (the `rgb_to_depth` topics).

Because the RGB and depth cameras are separated by ~3 cm, they observe slightly different parts of the scene. Some parts of the scene that are visible to the depth camera are not visible to the RGB camera, and likewise some parts of the scene visible to the RGB camera are not visible to the depth camera. The closer the object is to the camera, the greater the difference is between the observation from the depth camera and the color camera.

So why do we get worse color spill in the `rgb_to_depth` image? When we transform the color image into the depth camera perspective, we have to assign a color to every valid depth pixel. But the color image is flat: it contains no depth information, so we have no way of knowing which color pixels were occluded from the depth camera's point of view. At object boundaries, color from the foreground object ends up assigned to depth pixels that actually belong to the background behind it, which is the "second hand" you are seeing.

Now consider the inverse case, where we transform the depth image into the RGB camera perspective. Again, the depth camera has captured some information about the scene that is invisible to the RGB camera, and the RGB camera has captured some information about the scene that is invisible to the depth camera. When we transform the depth image into the RGB coordinate space, any RGB pixels that were invisible to the depth camera will be invalidated automatically (since they have a NaN depth value). But what do we do about depth pixels for which we have no color information? Isn't this the same problem? Well, because the depth image is three-dimensional, we can "simulate" what a depth image would have looked like from the RGB camera position. This gives us a second set of invalid pixels: the depth pixels that are invisible to the RGB camera. When we invalidate both sets of pixels (depth pixels that the depth camera did not observe, and depth pixels that would have been invisible if the depth camera was in the position of the RGB camera), we get a nice clean RGB-D image with very little spill.
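Here is a minimal sketch of the two registration directions, using the Azure Kinect Sensor SDK's C API directly rather than the ROS node's actual code. The depth mode and color resolution below are assumptions and must match however the device was actually configured:

```cpp
// Minimal sketch of the two registration directions using the Azure Kinect
// Sensor SDK C API. Illustrative only, not the ROS node's code: error
// handling is reduced to asserts and capture acquisition is omitted.
#include <k4a/k4a.h>
#include <assert.h>
#include <stdint.h>

void register_images(k4a_device_t device, k4a_image_t depth_image, k4a_image_t color_image)
{
    // The factory calibration encodes the relative pose of the two cameras.
    // Assumed modes: NFOV unbinned depth, 720p color.
    k4a_calibration_t calibration;
    assert(k4a_device_get_calibration(device, K4A_DEPTH_MODE_NFOV_UNBINNED,
                                      K4A_COLOR_RESOLUTION_720P,
                                      &calibration) == K4A_RESULT_SUCCEEDED);
    k4a_transformation_t transformation = k4a_transformation_create(&calibration);

    // Direction 1: depth -> color camera perspective ("depth_to_rgb").
    // The SDK re-renders the depth image from the RGB camera's viewpoint, so
    // depth pixels occluded from the RGB camera come out invalid: little spill.
    k4a_image_t transformed_depth;
    assert(k4a_image_create(K4A_IMAGE_FORMAT_DEPTH16,
                            calibration.color_camera_calibration.resolution_width,
                            calibration.color_camera_calibration.resolution_height,
                            calibration.color_camera_calibration.resolution_width * (int)sizeof(uint16_t),
                            &transformed_depth) == K4A_RESULT_SUCCEEDED);
    assert(k4a_transformation_depth_image_to_color_camera(transformation, depth_image,
                                                          transformed_depth) == K4A_RESULT_SUCCEEDED);

    // Direction 2: color -> depth camera perspective ("rgb_to_depth").
    // The flat BGRA color image carries no depth, so occluded color pixels
    // cannot be detected, and foreground color spills onto background pixels.
    k4a_image_t transformed_color;
    assert(k4a_image_create(K4A_IMAGE_FORMAT_COLOR_BGRA32,
                            calibration.depth_camera_calibration.resolution_width,
                            calibration.depth_camera_calibration.resolution_height,
                            calibration.depth_camera_calibration.resolution_width * 4,
                            &transformed_color) == K4A_RESULT_SUCCEEDED);
    assert(k4a_transformation_color_image_to_depth_camera(transformation, depth_image, color_image,
                                                          transformed_color) == K4A_RESULT_SUCCEEDED);

    k4a_image_release(transformed_depth);
    k4a_image_release(transformed_color);
    k4a_transformation_release(transformation);
}
```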
On the topic of performance, there are a number of factors to consider here:

- There is a known bug that is causing high-resolution colorized point clouds to lag behind real-time: see #56.
- In RViz, consider that the Azure Kinect is producing an order of magnitude more data than the Kinect for Xbox One: see the comparison document here for more details. This makes the Azure Kinect much more demanding to visualize in RViz.
- The Azure Kinect does more post-processing of the data in the depth engine and on the host CPU when compared to the Kinect for Xbox One, so it requires more processing power on the host PC.

I'll look into adding a warning in the node to indicate if the sensor processing loop has fallen behind real-time.
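For that warning, a rough sketch of what it could look like in a ROS 1 node follows. This is hypothetical, not the driver's actual code; `check_loop_timing`, the 100 ms budget, and the source of the capture timestamp are all assumptions:

```cpp
// Hypothetical sketch of a "fallen behind real-time" warning. Compares the
// age of the capture about to be published against an assumed lag budget.
#include <ros/ros.h>

void check_loop_timing(const ros::Time& capture_stamp)
{
    // If the capture we are about to publish was taken noticeably in the
    // past, the processing loop is not keeping up with the sensor.
    const ros::Duration lag = ros::Time::now() - capture_stamp;
    const ros::Duration budget(0.1);  // assumed threshold: 100 ms

    if (lag > budget)
    {
        ROS_WARN_THROTTLE(1.0, "Sensor processing loop has fallen behind real-time by %.3f s",
                          lag.toSec());
    }
}
```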
The problem still exists in the latest SDK.
Not resolved yet.
Describe the bug
There are severe overlaps at the boundaries of objects in the `/rgb_to_depth/image_raw` topic. You can see my hand below: it looks like there is another hand close behind my real hand. Besides, the point cloud topic also seems to suffer from this boundary problem; it makes objects look "larger" due to the overlap at their boundaries. What's more, the point cloud also has approximately 1 second of lag, which makes real-time applications based on the Azure Kinect ROS Driver difficult.
To Reproduce
With my `driver.launch` parameter configuration, run the driver and then show `/rgb_to_depth/image_raw` in `rqt_image_view`.
Expected behavior
A registered RGB-D image with a correct, sharp boundary, and real-time PointCloud2 data with low lag (consider the Kinect v1 camera's real-time point cloud performance in RViz as a reference).