
Kinect and ROS

Posted: 28 Feb 2013, 19:00
by eozkucur
Hi,

I am trying to simulate a Kinect in V-REP and obtain a sensor_msgs::PointCloud2 or pcl::PointCloud<pcl::PointXYZRGB> from a ROS node. What is the best approach to do this?

When I add the Kinect model to a scene, it includes a vision sensor that provides a depth image as sensor_msgs/Image or a depth buffer as vrep_common/VisionSensorDepthBuff. How can I obtain the corresponding color image and register it with the depth buffer to build a pcl::PointCloud<pcl::PointXYZRGB> in a ROS node?

Thanks in advance
Ergin

Re: Kinect and ROS

Posted: 28 Feb 2013, 19:54
by coppelia
Hello,

Have a look at the demo scene "rosTopicPublisherAndSubscriber.ttt" that comes with the default installation. In particular, inspect the child script attached to the object "Vision_sensor". The following two instructions are of interest if you want to publish the image of a vision sensor:

Code:

-- Retrieve the handle of the vision sensor we wish to stream:
visionSensorHandle=simGetObjectHandle('Vision_sensor')
-- Now enable topic publishing and streaming of the vision sensor's data:
topicName=simExtROS_enablePublisher('visionSensorData',1,simros_strmcmd_get_vision_sensor_image,visionSensorHandle,0,'')
You will have to set the correct name for the vision sensor of the Kinect model.
To send the depth map, it is very similar, something like:

Code:

topicName=simExtROS_enablePublisher('visionSensorDepthData',1,simros_strmcmd_get_vision_sensor_depth_buffer,visionSensorHandle,0,'')
Vision sensors cannot (yet) directly produce a point cloud message, but that shouldn't be difficult to implement.
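For example, on the ROS side you could combine the two published streams into a colored point cloud yourself. The following is only a rough sketch, not the plugin's method: it assumes both streams arrive as sensor_msgs/Image (rgb8 color, 32FC1 depth), that the topic names are /vrep/visionSensorData and /vrep/visionSensorDepthData, and that a pinhole model with roughly a 57 degree field of view applies; adjust the names, encodings and intrinsics to your scene. Also note that V-REP's raw depth buffer is normalized between the near and far clipping planes, so it may have to be converted to metres first.

Code:

// Rough sketch of a ROS node that fuses an RGB image and a depth image
// into a colored point cloud. Topic names, image encodings and the field
// of view below are assumptions and must match your actual scene.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/PointCloud2.h>
#include <message_filters/subscriber.h>
#include <message_filters/synchronizer.h>
#include <message_filters/sync_policies/approximate_time.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <pcl_conversions/pcl_conversions.h>
#include <boost/bind.hpp>
#include <cmath>

ros::Publisher g_cloud_pub;

void callback(const sensor_msgs::ImageConstPtr& rgb,
              const sensor_msgs::ImageConstPtr& depth)
{
  const int w = depth->width, h = depth->height;

  // Pinhole intrinsics derived from the resolution and an assumed 57 deg FOV:
  const float fov = 57.0f * M_PI / 180.0f;
  const float fx = (w / 2.0f) / std::tan(fov / 2.0f);
  const float fy = fx, cx = w / 2.0f, cy = h / 2.0f;

  pcl::PointCloud<pcl::PointXYZRGB> cloud;
  cloud.width = w;
  cloud.height = h;
  cloud.is_dense = false;
  cloud.points.resize(w * h);

  // Assumes the depth image already holds metric depth; the raw V-REP depth
  // buffer is normalized and may first need: z = near + d * (far - near).
  const float* d = reinterpret_cast<const float*>(&depth->data[0]);
  for (int v = 0; v < h; ++v) {
    for (int u = 0; u < w; ++u) {
      pcl::PointXYZRGB& p = cloud.points[v * w + u];
      const float z = d[v * w + u];
      p.x = (u - cx) * z / fx;
      p.y = (v - cy) * z / fy;
      p.z = z;
      const uint8_t* c = &rgb->data[v * rgb->step + 3 * u];
      p.r = c[0]; p.g = c[1]; p.b = c[2];
    }
  }

  sensor_msgs::PointCloud2 msg;
  pcl::toROSMsg(cloud, msg);
  msg.header = depth->header;
  g_cloud_pub.publish(msg);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "vrep_kinect_cloud");
  ros::NodeHandle nh;
  g_cloud_pub = nh.advertise<sensor_msgs::PointCloud2>("kinect/points", 1);

  // Approximate-time synchronization of the two image streams:
  message_filters::Subscriber<sensor_msgs::Image> rgb_sub(nh, "/vrep/visionSensorData", 1);
  message_filters::Subscriber<sensor_msgs::Image> depth_sub(nh, "/vrep/visionSensorDepthData", 1);
  typedef message_filters::sync_policies::ApproximateTime<sensor_msgs::Image, sensor_msgs::Image> SyncPolicy;
  message_filters::Synchronizer<SyncPolicy> sync(SyncPolicy(10), rgb_sub, depth_sub);
  sync.registerCallback(boost::bind(&callback, _1, _2));

  ros::spin();
  return 0;
}

Depending on how you enable the publishers, the depth stream might instead arrive as a vrep_common/VisionSensorDepthBuff message; in that case the subscriber has to be adapted accordingly.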

Cheers

Connecting a robot using ROS

Posted: 20 Jun 2013, 06:09
by ktjolsen
Hi,
When you connect a robot using ROS, does that mean that the robot is connected via a cable and sits still while the software in the physical robot is simulated in V-REP?

For example, if I connect iRobot's Roomba via ROS, then the Roomba will just sit next to the computer while the cleaning algorithm is implemented in the V-REP scene?

Thank you!

Re: Kinect and ROS

Posted: 20 Jun 2013, 10:11
by coppelia
Hello,

When you use the ROS framework, you are free to do whatever you want:
  • You can have the real controller control the virtual robot. This is often used to improve a real controller and test the real robot offline.
  • You can have the real controller control the real robot and visualize its state on the virtual robot
  • You can have the virtual controller control the real robot
  • and so on...
In that sense, the V-REP ROS API is just a means of connecting two separate entities (e.g. the real controller with V-REP). Additionally, ROS provides a lot of libraries. What exactly you do in the end is totally up to you.

Cheers

Re: Kinect and ROS

Posted: 09 Jul 2013, 12:56
by cedric.pradalier
Hi,

In case this is still useful, I prepared a patch to the ROS stack that adds a publisher for depth data from a vision sensor. The end result is an RGBD point cloud.
In my use case, I will simplify the Kinect model to use a single camera and publish both the point cloud and the image from the same rendering.

The patch (against V-Rep 3.0.4) is available at http://ubuntuone.com/1Ie0gWPFWOXHDreFFAg4Cn

Note that in this version, the links to the plugin header files and source files are incorrect, so they would have to be corrected before compiling the package.

The patch also corrects a minor bug in the laser publisher where the timestamp was only set AFTER the message was published.

Tested on Ubuntu 12.04, 64-bit, with ROS Fuerte.

I hope that helps.

Re: Kinect and ROS

Posted: 09 Jul 2013, 14:27
by coppelia
Thanks Cedric!

We'll try to integrate the patch for the next release.

Cheers

Re: Kinect and ROS

Posted: 15 Aug 2013, 11:44
by xelda1988
Hey Cedric

I applied your patch, but in a Lua script V-REP doesn't know the constant:

simros_strmcmd_get_depth_sensor_data

which it should know after applying the patch. The file it should come from is v_repConst.h, which is in one of the subdirectories of programming/.

I rebuilt V-REP 3.0.4 from source, but somehow the changes are not picked up. What's the problem?

Cheers, Alex

Re: Kinect and ROS

Posted: 14 Sep 2013, 01:28
by philotuxo
Hi Cedric,

I guess the version on GitHub already includes the patch given in this thread, doesn't it?
And could you supply a simple point cloud publisher model/scene? I couldn't figure out how to publish a point cloud via V-REP.

Merci,
Serhan

Re: Kinect and ROS

Posted: 16 Sep 2013, 17:22
by formica
My question is simpler. :D
I'm new to ROS. How can I read a depth image streamed by the V-REP model of the Kinect in ROS?
Which stack could be useful for this purpose?

Regards
formica

Re: Kinect and ROS

Posted: 16 Sep 2013, 18:38
by coppelia
Have a look at the source code of the V-REP ROS plugin, file ROS_server.cpp, message: sensor_msgs::PointCloud2. That's the data being streamed, so you will have to subscribe to it. Refer to the ROS and sensor_msgs::PointCloud2 doc for details.
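As a rough sketch, a minimal subscriber node could look like the following (the topic name below is only a placeholder; check the actual name with "rostopic list" while the simulation is running):

Code:

// Minimal sketch of a node that listens to the streamed point cloud.
// The topic name "/vrep/pointcloud" is only a placeholder.
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
  // Just report the size and frame of each incoming cloud:
  ROS_INFO("Received cloud: %u x %u points in frame %s",
           msg->width, msg->height, msg->header.frame_id.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "vrep_cloud_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/vrep/pointcloud", 1, cloudCallback);
  ros::spin();
  return 0;
}

From there you can convert the message to a PCL cloud with pcl::fromROSMsg if needed.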

Cheers