ROS kinect depth image encoding problem

Billie1123
Posts: 62
Joined: 08 Jun 2016, 22:47

Re: ROS kinect depth image encoding problem

Post by Billie1123 »

Ok, good to know.

Thank you very much for your answer, I'll work on it.

Regards

sradmard
Posts: 11
Joined: 06 Oct 2016, 00:39

Re: ROS kinect depth image encoding problem

Post by sradmard »

Hi,
I am also experiencing the same problem. When the depth data is passed to ROS through the ROS Interface using simGetVisionSensorDepthBuffer, the image is flipped. Applying the "Flip work image vertically" filter also does not work for me.
I was wondering whether the new "Work image to output depth image" filter has been released yet.
Thank you so much in advance.

ljklonepiece
Posts: 105
Joined: 10 Oct 2013, 14:51

Re: ROS kinect depth image encoding problem

Post by ljklonepiece »

sradmard wrote:Hi,
I am also experiencing the same problem. When the depth data is passed to ROS through the ROS Interface using simGetVisionSensorDepthBuffer, the image is flipped. Applying the "Flip work image vertically" filter also does not work for me.
I was wondering whether the new "Work image to output depth image" filter has been released yet.
Thank you so much in advance.
Hi,
Yes, my depth image is also flipped along the horizontal axis.
What I did was flip it back using OpenCV outside of V-REP.
Is there any reason that prevents you from doing that?
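For reference, a minimal sketch of that flip-back step, assuming the depth image has already been decoded into a single-channel 16UC1 array (e.g. via cv_bridge); the array contents here are made up for illustration. Reversing the row order with NumPy is equivalent to cv2.flip(img, 0):

```python
import numpy as np

# Hypothetical 16UC1 depth image (rows x cols), as cv_bridge would return it.
depth = np.array([[100, 200, 300],
                  [400, 500, 600]], dtype=np.uint16)

# Flip vertically (reverse the row order) -- equivalent to cv2.flip(depth, 0).
flipped = depth[::-1, :].copy()

# A vertical flip only reorders rows; every depth value is preserved.
assert flipped.dtype == np.uint16
assert sorted(flipped.ravel()) == sorted(depth.ravel())
```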

sradmard
Posts: 11
Joined: 06 Oct 2016, 00:39

Re: ROS kinect depth image encoding problem

Post by sradmard »

Thank you for your reply.
That is possible and I have done it. However, flipping the image with OpenCV's flip function seems to return different values for my depth image with 16UC1 encoding. Also, as Billie1123 mentioned, the frame_id attached to the depth image does not seem to follow the rviz (ROS) convention. Therefore, it gives me a tf error in ROS even though I am publishing the tf associated with my depth camera with respect to my robot base.

Specifically, I am extracting laser scan data from the published depth image using the depthimage_to_laserscan package. When I apply it to the depth image published from V-REP, the scan values seem to be right, but I can only visualize it in rviz when my fixed frame is set to the laser_scan frame.
When I apply depthimage_to_laserscan to the depth image I flipped with OpenCV's flip function, the scan values seem to be wrong.
Also, in both cases, I get an error in rviz when I set the fixed frame to base_link, even though I am publishing the tf between laser_scan and base_link. The error is:
For frame [laser_frame]: No transform to fixed frame [base_link]. TF error: [Lookup would require extrapolation into the future. Requested time 1568576.500000000 but the latest data is at time 236.399993896, when looking up transform from frame [laser_frame] to frame [base_link]]

ljklonepiece
Posts: 105
Joined: 10 Oct 2013, 14:51

Re: ROS kinect depth image encoding problem

Post by ljklonepiece »

sradmard wrote:Thank you for your reply.
That is possible and I have done it. However, flipping the image with OpenCV's flip function seems to return different values for my depth image with 16UC1 encoding. Also, as Billie1123 mentioned, the frame_id attached to the depth image does not seem to follow the rviz (ROS) convention. Therefore, it gives me a tf error in ROS even though I am publishing the tf associated with my depth camera with respect to my robot base.

Specifically, I am extracting laser scan data from the published depth image using the depthimage_to_laserscan package. When I apply it to the depth image published from V-REP, the scan values seem to be right, but I can only visualize it in rviz when my fixed frame is set to the laser_scan frame.
When I apply depthimage_to_laserscan to the depth image I flipped with OpenCV's flip function, the scan values seem to be wrong.
Also, in both cases, I get an error in rviz when I set the fixed frame to base_link, even though I am publishing the tf between laser_scan and base_link. The error is:
For frame [laser_frame]: No transform to fixed frame [base_link]. TF error: [Lookup would require extrapolation into the future. Requested time 1568576.500000000 but the latest data is at time 236.399993896, when looking up transform from frame [laser_frame] to frame [base_link]]

Not sure why flipping changes the values in OpenCV, but for the tf error you can try the following:
1. rosrun tf view_frames: check frames.pdf to see whether base_link and laser_frame are linked
2. try the following code to get the correct transform at the right time

Code: Select all

import rospy
import tf

tl = tf.TransformListener()  # this was self.tl in my original snippet

while not rospy.is_shutdown():
    try:
        # look up the transform at the latest time common to both frames
        time = tl.getLatestCommonTime('base_link', 'laser_frame')
        (trans, quat) = tl.lookupTransform('base_link', 'laser_frame', time)
        break
    except (tf.Exception, tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
        continue
3. sometimes restarting V-REP and everything related works for me :)

sradmard
Posts: 11
Joined: 06 Oct 2016, 00:39

Re: ROS kinect depth image encoding problem

Post by sradmard »

Thank you so much for your kind response.
I managed to solve my rviz problem. The tf was getting published correctly, but the timestamps of the messages I was publishing to ROS were wrong. I just needed to change the following in my depth image publisher in V-REP:

Code: Select all

    d_depth['header']={seq=0,stamp=simGetSystemTime(), frame_id="base_scan"}
to the following:

Code: Select all

    d_depth['header']={seq=0,stamp=simGetSimulationTime(), frame_id="base_scan"}
It was my system time and simulation time not matching; now everything is based on simulation time.
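The mismatch is easy to see in plain numbers (a sketch; both stamps are taken from the rviz error quoted earlier, and the ~10 s figure is tf's default buffer length): the simGetSystemTime stamp is enormous compared to the simulation clock, so tf is asked for a transform far beyond its buffered history and refuses to extrapolate:

```python
# Stamps taken from the TF error quoted above.
requested = 1568576.5      # header stamp produced by simGetSystemTime
latest_tf = 236.399993896  # latest transform, stamped with simulation time

# tf's default buffer keeps roughly 10 seconds of history, so any request
# farther ahead than that forces extrapolation and the lookup fails.
TF_BUFFER_SECONDS = 10.0
gap = requested - latest_tf
assert gap > TF_BUFFER_SECONDS  # ~1.57 million seconds of "future"
```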
However, I still cannot figure out why the flip function in OpenCV messes up the depth data.
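One possible explanation (an assumption, not something verified in V-REP): a pure flip only reorders pixels and can never change their values, so if the flipped image contains different depths, the 16UC1 buffer is probably being reinterpreted with the wrong element type somewhere along the way. A small NumPy sketch of the effect (assuming a little-endian machine, as on x86/ARM):

```python
import numpy as np

depth = np.array([[1, 2]], dtype=np.uint16)

# Horizontal flip with the correct 16-bit element type: values are preserved.
ok = depth[:, ::-1]
assert ok.tolist() == [[2, 1]]

# Same flip after misreading the buffer as 8-bit: the two bytes of each
# 16-bit depth value get split across pixels, producing garbage depths.
bad = depth.view(np.uint8)[:, ::-1].copy().view(np.uint16)
assert bad.tolist() == [[512, 256]]
```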
