Lidar velodyne

qiwang
Posts: 17
Joined: 25 Feb 2013, 13:34

Lidar velodyne

Post by qiwang » 16 Jun 2014, 14:03

Hello,

Is it possible to model the Velodyne Lidar with a vision sensor as well?

Thanks

coppelia
Site Admin
Posts: 7388
Joined: 14 Dec 2012, 00:25

Re: Lidar velodyne

Post by coppelia » 16 Jun 2014, 23:23

Hello,

that should be no problem at all. The real sensor rotates, and you could model it the same way (a single vision sensor object mounted on a revolute joint). But you could also simply use 3 non-rotating vision sensors that together deliver the same 360-degree points.
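The second option boils down to simple arithmetic: each fixed sensor covers a third of the horizontal field, and the sensors are yawed a third of a turn apart. A minimal sketch of that layout (the sensor count and per-sensor FOV are illustrative assumptions, not V-REP defaults):

```python
# Hypothetical setup: 3 fixed vision sensors that together cover 360 degrees.
num_sensors = 3
fov_per_sensor = 360.0 / num_sensors                      # 120 degrees each

# Yaw of each sensor's optical axis around the common vertical axis.
yaws = [i * fov_per_sensor for i in range(num_sensors)]   # [0.0, 120.0, 240.0]

# Each sensor covers [yaw - fov/2, yaw + fov/2); together: the full circle.
coverage = num_sensors * fov_per_sensor                   # 360.0
```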

Cheers

qiwang
Posts: 17
Joined: 25 Feb 2013, 13:34

Re: Lidar velodyne

Post by qiwang » 18 Jun 2014, 13:16

Hello,

Thanks for your reply.
I have tried to model the Velodyne Lidar by using a single vision sensor object mounted on a revolute joint, and I set the vision sensor to a different direction in each simulation step, but it seems the vision sensor always returns the same data in every direction. I wonder if I can make the vision sensor update its data for each direction within the same simulation step.

Using 3 non-rotating vision sensors does produce 360-degree points, but the points form a triangular pattern on the ground instead of the circular pattern of a Velodyne Lidar. This might be a problem if the algorithm depends on the spatial point pattern of the Velodyne Lidar.
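This triangular-versus-circular difference follows from the geometry: a beam spun at a fixed downward elevation always hits the ground at the same range (a circle), while one image row of a planar vision sensor intersects the ground plane along a straight line, so three such sensors trace a triangle. A small illustrative sketch (the height and angle values are made-up numbers):

```python
import math

h = 1.0                   # sensor height above the ground (assumption)
elev = math.radians(15)   # downward elevation of the beam / image row (assumption)

# Rotating beam: a single ray swept in azimuth at fixed elevation.
def rotating_hit(azimuth):
    r = h / math.tan(elev)                    # horizontal range to the ground hit
    return (r * math.cos(azimuth), r * math.sin(azimuth))

# Planar vision sensor: one image row of a pinhole camera pitched down.
# Ray direction (1, u, -tan(elev)) for horizontal image coordinate u.
def planar_hit(u):
    t = h / math.tan(elev)                    # scale to reach the ground plane z=0
    return (t, t * u)

rot = [rotating_hit(math.radians(a)) for a in (-30, 0, 30)]
plan = [planar_hit(u) for u in (-0.5, 0.0, 0.5)]

# Rotating hits are equidistant from the sensor axis (a circle)...
radii = [math.hypot(x, y) for x, y in rot]
# ...while planar hits all share the same forward distance (a straight line).
xs = [x for x, _ in plan]
```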

Also, I want to modify the number of laser beams of fastHokuyo, but I could not find any parameter to change it.


Thanks

coppelia
Site Admin
Posts: 7388
Joined: 14 Dec 2012, 00:25

Re: Lidar velodyne

Post by coppelia » 18 Jun 2014, 14:10

For your task I would use a non-threaded child script executed in the sensing phase (check Execute in the sensing phase in the script properties):

Then, at each simulation pass, your child script will be called (after the actuation phase), and there you can read one image, rotate the joint, read another image, and so on, until you have scanned everything you wanted in that simulation step.

For that to work correctly, you need to handle the vision sensor manually (i.e. explicit handling). In the vision sensor properties, check Explicit handling. Then, from within your child script, you could write:

Code: Select all

-- Read a first set of depth points:
simHandleVisionSensor(visionSensorHandle)
r,t1,u1=simReadVisionSensor(visionSensorHandle) -- extracted points arrive in the auxiliary packets
-- process the data in u1
-- rotate the revolute joint (e.g. with simSetJointPosition)
-- Read the next set of depth points:
simHandleVisionSensor(visionSensorHandle)
r,t2,u2=simReadVisionSensor(visionSensorHandle)
-- process the data in u2
-- etc., until you have covered the desired angular range in this step
To change the number of depth points returned, check the filter component Extract coordinates from work image. Double-click it and edit the Point count along X and Point count along Y values. Also make sure the resolution of the vision sensor is appropriate (similar to the Point count along X/Y).
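As a sanity check on those settings, the point count follows from the angular spacing you want between neighbouring depth points. A rough illustrative calculation (the 120-degree FOV and 0.2-degree spacing are assumed values, not V-REP defaults):

```python
# Assumed values: one sensor with a 120-degree horizontal FOV and a
# desired angular spacing of 0.2 degrees between neighbouring points.
fov_deg = 120.0
spacing_deg = 0.2

# Depth points needed along X (fence-post count: intervals + 1).
point_count_x = round(fov_deg / spacing_deg) + 1   # 601

# The sensor's pixel resolution along X should be at least comparable,
# otherwise several extracted points sample the same pixel column.
min_resolution_x = point_count_x
```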

Cheers

qiwang
Posts: 17
Joined: 25 Feb 2013, 13:34

Re: Lidar velodyne

Post by qiwang » 19 Jun 2014, 06:28

Thanks, it worked perfectly, this is exactly what I wanted :)

qiwang
Posts: 17
Joined: 25 Feb 2013, 13:34

Re: Lidar velodyne

Post by qiwang » 20 Jun 2014, 17:10

Hello, there is a question I have always wondered about: does the computation of a vision-sensor-based laser scanner depend on the graphics card? If it does, would it run faster with a better graphics card, or maybe with a dual-graphics-card setup?

Thanks

coppelia
Site Admin
Posts: 7388
Joined: 14 Dec 2012, 00:25

Re: Lidar velodyne

Post by coppelia » 21 Jun 2014, 03:10

Hello,

yes, you are right: the vision sensors are based on OpenGL instructions and depend directly on the GPU, so they will run faster with a better graphics card.

Cheers

coppelia
Site Admin
Posts: 7388
Joined: 14 Dec 2012, 00:25

Re: Lidar velodyne

Post by coppelia » 20 Oct 2014, 12:35

Just as a heads-up: there is now a Velodyne model available in V-REP (see here for the video). It will be part of the next release (the one coming after 3.1.3). A beta can be obtained by contacting us here.

Cheers
