## Lidar velodyne

qiwang
Posts: 17
Joined: 25 Feb 2013, 13:34

### Lidar velodyne

Hello,

Is it also possible to model the Velodyne lidar with a vision sensor?

Thanks

coppelia
Posts: 7396
Joined: 14 Dec 2012, 00:25

### Re: Lidar velodyne

Hello,

That should be no problem at all. The real sensor rotates, and you could model it that way too (by mounting a single vision sensor object on a revolute joint). Alternatively, you could simply use 3 non-rotating vision sensors, each covering 120 degrees, which together deliver the same 360-degree point coverage.
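
As a rough sketch of the three-sensor variant (legacy Lua API; the sensor names are placeholders, and each sensor is assumed to carry an Extract coordinates from work image filter — adjust to your scene):

```lua
-- Sketch: merge the points of three fixed vision sensors (120 degrees apart)
-- into one 360-degree scan per simulation step. Sensor names are placeholders.
sensors = {simGetObjectHandle('velodyneSensor1'),
           simGetObjectHandle('velodyneSensor2'),
           simGetObjectHandle('velodyneSensor3')}
allPoints = {}
for i = 1, #sensors do
    local result, t0, t1 = simReadVisionSensor(sensors[i])
    if result >= 0 and t1 then
        -- t1 is the auxiliary packet of the coordinate-extraction filter;
        -- here we simply concatenate the three packets into one table
        for j = 1, #t1 do
            allPoints[#allPoints + 1] = t1[j]
        end
    end
end
```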

Cheers

qiwang
Posts: 17
Joined: 25 Feb 2013, 13:34

### Re: Lidar velodyne

Hello,

I have tried to model the Velodyne lidar by mounting a single vision sensor object on a revolute joint, and I set the vision sensor to different directions in each simulation step, but the vision sensor always returns the same data in every direction. I wonder if I can make the vision sensor update its data for each direction within the same simulation step.

Using 3 non-rotating vision sensors does produce 360-degree points, but the points form a triangular pattern on the ground instead of the circular pattern of a real Velodyne lidar. This might be a problem if an algorithm depends on the spatial point pattern of the Velodyne lidar.

Also, I want to modify the number of laser beams of the fastHokuyo model, but I could not find any parameter to change it.

Thanks

coppelia
Posts: 7396
Joined: 14 Dec 2012, 00:25

### Re: Lidar velodyne

I would use a non-threaded child script, executed in the sensing phase, for your task (check Execute in the sensing phase in the script properties).

Then, at each simulation pass, your child script will be called (after the action phase), and there you can read one image, rotate the joint, read another image, etc., until you have scanned what you wanted in that simulation step.

For that to work correctly, you need to handle the vision sensor manually (i.e. explicit handling). In the vision sensor properties, check Explicit handling. Then, from within your child script, you could write:

Code:

```lua
-- Explicit handling: scan several directions within one simulation step
-- (jointHandle and visionSensorHandle retrieved earlier with simGetObjectHandle)
local stepsPerScan = 8 -- number of directions to sample per step (example value)
for i = 0, stepsPerScan - 1 do
    simSetJointPosition(jointHandle, i * 2 * math.pi / stepsPerScan)
    simHandleVisionSensor(visionSensorHandle)
    -- read the resulting depth points here, e.g. with simReadVisionSensor
end
```

To change the number of depth points returned, check the filter component Extract coordinates from work image: double-click it and edit the Point count along X and Point count along Y values. Also make sure the resolution of the vision sensor is appropriate (i.e. similar to the point counts along X and Y).
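
As a hedged sketch, the extracted coordinates could then be read from the child script after a handle call. The auxiliary-packet layout assumed below (the two point counts followed by x, y, z, distance per point) should be verified against your V-REP version:

```lua
-- Sketch: read the points produced by the "Extract coordinates from work image"
-- filter. Assumed packet layout: {countX, countY, x1,y1,z1,d1, x2,y2,z2,d2, ...}
local result, t0, t1 = simReadVisionSensor(visionSensorHandle)
if result >= 0 and t1 then
    local countX, countY = t1[1], t1[2]
    for p = 0, countX * countY - 1 do
        local base = 2 + p * 4
        local x = t1[base + 1]
        local y = t1[base + 2]
        local z = t1[base + 3]
        local dist = t1[base + 4]
        -- x, y, z are expressed in the vision sensor's frame;
        -- dist is the measured distance to the detected point
    end
end
```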

Cheers

qiwang
Posts: 17
Joined: 25 Feb 2013, 13:34

### Re: Lidar velodyne

Thanks, it worked perfectly; this is exactly what I wanted :)

qiwang
Posts: 17
Joined: 25 Feb 2013, 13:34

### Re: Lidar velodyne

Hello, there is a question I have always wondered about: does the computation of a vision-sensor-based laser scanner depend on the graphics card? If it does, does it run faster with a better graphics card, or maybe with a dual-graphics-card setup?

Thanks

coppelia
Posts: 7396
Joined: 14 Dec 2012, 00:25

### Re: Lidar velodyne

Hello,

Yes, you are right: vision sensors are based on OpenGL instructions and directly depend on the GPU, so they will run faster with a better graphics card.

Cheers
