Depth sensor not returning NaN
Posted: 11 Mar 2017, 01:49
Hi,
I am using a Kinect sensor model with a depth camera and publishing its output to ROS through the ROS interface. My problem is that when there is nothing in front of the camera, the depth image reports the maximum range (the far clipping plane). In reality, the sensor should return NaN in that case. This causes problems for mapping algorithms in ROS that consume the data: they cannot distinguish empty space from an object sitting exactly at the maximum range. Is there a setting I am missing that would make it return NaN?
Thank you so much in advance.
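In the meantime, as a workaround, I am thinking of masking the far-plane pixels myself before feeding the image to the mapping stack. A minimal sketch (the far-clip value of 3.5 m and the function name are just my assumptions; adjust to your sensor's actual far clipping plane):

```python
import numpy as np

FAR_CLIP = 3.5  # assumed far clipping plane in metres; match your sensor model


def mask_far_plane(depth, far=FAR_CLIP, tol=1e-3):
    """Replace pixels at (or numerically near) the far clipping plane with NaN,
    mimicking what a real Kinect reports for out-of-range returns."""
    depth = depth.astype(np.float32, copy=True)
    depth[depth >= far - tol] = np.nan
    return depth
```

This could run in a small relay node that subscribes to the simulated depth topic and republishes the cleaned image, but I would prefer a native setting if one exists.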