simulation of a generic lidar sensor

Typically: "How do I... ", "How can I... " questions
Post Reply
maf
Posts: 16
Joined: 16 Jan 2023, 16:34

simulation of a generic lidar sensor

Post by maf »

Hi,
After browsing the documentation, I am still confused about how to simulate a generic lidar sensor.

I would imagine using proximity sensors (as they can measure distance), but then I see that the predefined Velodyne sensor is composed of four _vision_ sensors. On the other hand, other predefined laser scanners in CoppeliaSim are actually based on proximity sensors...

- Which approach would you recommend to simulate a generic 3D lidar? One where I can set basic parameters such as range, horizontal and vertical resolution, rotation rate, etc.

- What about a 2D lidar? Same approach or not?

- What is so Velodyne-specific in the predefined CoppeliaSim model of the Velodyne lidar? Can I generalize it by hacking the associated scripts a bit? Would I also need to change the functions in the simVision plugin (such as "handleVelodyneVLP16")?

Thanks in advance,
Marco

coppelia
Site Admin
Posts: 10375
Joined: 14 Dec 2012, 00:25

Re: simulation of a generic lidar sensor

Post by coppelia »

Hello Marco,

in CoppeliaSim there are two ways to detect an object and measure its distance to the sensor: proximity sensors and vision sensors.

Which one you want to use depends on your needs. But in general:
  • Proximity sensors are more flexible and simpler to use, and I'd use them for a generic lidar sensor; if there are many points/distances to measure, however, they will be slower than vision sensors. With a ray-type proximity sensor mounted onto one or two appropriately actuated revolute joints, you can quickly obtain data for a custom sensor (see the sketch after this list).
  • Vision sensors are a bit more difficult to use, but they are faster if you need to measure many points/distances. With a vision sensor you basically acquire a depth image and then evaluate it appropriately, extracting the relevant points/distances.
2D and 3D lidars can be handled the same way. If you go with vision sensors and the angular range of the sensor is larger than about 120 degrees, you'd have to use at least one additional vision sensor (for that reason the Velodyne models use 4 vision sensors in total). Reading and evaluating a vision sensor depth map can be done in Lua/Python, but that is often too slow; for that reason, appropriate vision plugin functions are available, e.g. simVision.sensorDepthMapToWorkImg.
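Here is a minimal sketch of that proximity sensor approach (the object paths, the resolution and the explicit-handling setup are assumptions for illustration, not taken from a shipped model):

Code: Select all

function sysCall_init()
    base = sim.getObject('/lidarBase')
    joint = sim.getObject('/lidarBase/joint')
    sensor = sim.getObject('/lidarBase/joint/raySensor')
end

function sysCall_sensing()
    local pts = {}
    local steps = 360 -- horizontal resolution: here 1 deg per ray
    for i = 0, steps - 1 do
        sim.setJointPosition(joint, i * 2 * math.pi / steps)
        -- the ray sensor should be marked for explicit handling, so that it
        -- is only read here, at the pose we just set:
        local res, dist, pt = sim.handleProximitySensor(sensor)
        if res > 0 then
            -- pt is expressed in the sensor frame; map it into the lidar base frame:
            local m = sim.getObjectMatrix(sensor, base)
            pts[#pts + 1] = sim.multiplyVector(m, pt)
        end
    end
    -- pts now holds one full revolution, acquired in a single simulation pass
end

A second revolute joint, stepped in the same way, would add the vertical scan dimension of a 3D lidar.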

If you want to go with proximity sensors, have a look at the demo model models/components/sensors/Hokuyo URG 04LX UG01.ttm
Otherwise have a look at the demo model models/components/sensors/Camera pixels to 3D positions.ttm

Cheers

maf
Posts: 16
Joined: 16 Jan 2023, 16:34

Re: simulation of a generic lidar sensor

Post by maf »

Hi, thanks for your quick answer.

I am now looking into using vision sensors, and have studied the Camera pixels to 3D positions example model, as you suggested.

I placed four vision sensors to cover 360°. I use the vision callback in the script associated with each sensor to call `sensorDepthMapToWorkImg` and `coordinatesFromWorkImg` to obtain (I assume) 3D coordinates for a set of points.

The four vision sensors are children of another object, which represents the lidar itself (as in the Velodyne model). It also has an associated script, and I think I need to edit its sensing callback.

Makes sense so far?

In the lidar's script, how do you suggest I retrieve and combine the point data that is separately produced by each of the four child sensors?

I cannot learn this from the Velodyne example, as in that case the computation is hidden in a function of the vision plugin... ("handleVelodyneVLP16")

Thanks again

maf
Posts: 16
Joined: 16 Jan 2023, 16:34

Re: simulation of a generic lidar sensor

Post by maf »

After going through different parts of the docs, I think I figured out that sim.readVisionSensor can be used to retrieve the data returned by the sysCall_vision callback in the vision sensor's script.

But sysCall_vision must follow a convention: it has to return a table with a field called packedPackets (the "Vision callback functions" docs illustrate this, but do not stress it explicitly).

Then, the 3rd (and subsequent) return value(s) of sim.readVisionSensor will be the packet(s) returned by the vision callback (in my case the data is already unpacked for some reason, but I did not find anything about this in the docs).

To summarize, in the context of building a lidar from four child vision sensors: you can return the packed points from the sysCall_vision callback in each vision sensor's script, then retrieve all the data by calling sim.readVisionSensor four times in the sensing callback of the lidar's own script.
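In code, the pattern I mean looks roughly like this (computePackedPoints is a hypothetical stand-in for the simVision work-image calls, and sensorHandles is assumed to hold the four child sensor handles):

Code: Select all

-- in each vision sensor's script:
function sysCall_vision(inData)
    -- computePackedPoints is hypothetical; in practice this is where the
    -- simVision.sensorDepthMapToWorkImg / coordinatesFromWorkImg chain goes:
    local packedPts = computePackedPoints(inData.handle)
    return {packedPackets = {packedPts}}
end

-- in the sensing callback of the lidar's script:
function sysCall_sensing()
    local allPts = {}
    for i = 1, #sensorHandles do
        local result, auxData, pts = sim.readVisionSensor(sensorHandles[i])
        if pts then -- pts comes back as an unpacked float table
            for j = 1, #pts do
                allPts[#allPts + 1] = pts[j]
            end
        end
    end
end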

coppelia
Site Admin
Posts: 10375
Joined: 14 Dec 2012, 00:25

Re: simulation of a generic lidar sensor

Post by coppelia »

When using just one vision sensor, one could do everything inside of sysCall_vision. With several vision sensors, you have many different possibilities: e.g. you could use a string signal (or anything similar) to store/exchange the computed data, which could then be retrieved in a central place via sim.getStringSignal. Transmitted data should obviously be packed with sim.packFloatTable (if not yet packed) and unpacked with sim.unpackFloatTable. Additionally, large tables are slow to handle, and it is faster to e.g. concatenate packed buffers, then unpack them once.
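As a rough sketch (the signal names are just examples):

Code: Select all

-- in each vision sensor's script (i = 1..4), once the packed points are computed:
sim.setStringSignal('lidarPts' .. i, packedPts)

-- in a central script: concatenate the packed buffers first, then unpack once:
local buf = ''
for i = 1, 4 do
    buf = buf .. (sim.getStringSignal('lidarPts' .. i) or '')
end
local pts = sim.unpackFloatTable(buf)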

Cheers

maf
Posts: 16
Joined: 16 Jan 2023, 16:34

Re: simulation of a generic lidar sensor

Post by maf »

Hi, I have a few more questions:

About the packing/unpacking you mentioned: is it intended that the data I retrieve with sim.readVisionSensor is unpacked, even though the data I return in the vision sensor script is packed?

Script of the vision sensor:

Code: Select all

function sysCall_vision(inData)
	local retVal = {}
	-- copy the sensor's depth map into the work image first:
	simVision.sensorDepthMapToWorkImg(inData.handle)
	local _, pts, _ = simVision.coordinatesFromWorkImg(inData.handle, ...
	-- ..
	retVal.packedPackets = {pts}  -- pts is packed!
	return retVal
end
Script of my lidar sensor (the parent of the vision sensors):

Code: Select all

function sysCall_sensing()
	-- ..
	local _, _, pts = sim.readVisionSensor(vision_sensor_handle) -- pts is already UNpacked!
	-- .. (same for the other three sensors)
end

Regardless of the question above: in the sensing callback I need to process and combine all the point data from the four vision sensors, e.g. apply some rotations and remove the distance data, so as to construct a single container with only the X,Y,Z coordinates. I can then, for instance, publish this data as a point cloud message in ROS.
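Roughly, what I have in mind (assuming each point arrives as x, y, z plus a distance value, and that lidarHandle and sensorHandles were resolved in sysCall_init):

Code: Select all

local cloud = {}
for s = 1, #sensorHandles do
    local _, _, pts = sim.readVisionSensor(sensorHandles[s])
    -- transform of this vision sensor relative to the lidar object:
    local m = sim.getObjectMatrix(sensorHandles[s], lidarHandle)
    for i = 1, #pts, 4 do
        local p = sim.multiplyVector(m, {pts[i], pts[i + 1], pts[i + 2]})
        cloud[#cloud + 1] = p[1]
        cloud[#cloud + 1] = p[2]
        cloud[#cloud + 1] = p[3]
        -- pts[i + 3] (the distance value) is dropped
    end
end
-- cloud is now a flat x,y,z list, ready to be packed and published to ROS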

Done in Lua, this processing appears to be very slow. Would you suggest anything other than coding the same routine in C? (Perhaps I could add a new function to the simExtVision plugin?)


One last question: is it correct that the X/Y resolution of the vision sensor should be greater than or equal to the pointCount parameter of the simVision.coordinatesFromWorkImg function?

Thanks a lot for your support.

coppelia
Site Admin
Posts: 10375
Joined: 14 Dec 2012, 00:25

Re: simulation of a generic lidar sensor

Post by coppelia »

Some functions return packed data, since that can be easier and faster to handle in Lua; simVision.coordinatesFromWorkImg is such a function. sim.readVisionSensor, on the other hand, does not do anything except read the data previously acquired inside a sysCall_vision callback function. So instead, do everything in your callback function. If you have several sensors, you can consolidate the acquired data by setting e.g. 4 string signals and reading/using those from another script.

Another option, which is probably the best, would be to modify the simExtVision plugin so that you can do everything from within the plugin callback, e.g. in a similar way to what was done with simVision.createVelodyneVLP16 and simVision.handleVelodyneVLP16.

And about your last question: yes, indeed, your resolution needs to be noticeably larger than what you are actually reading, since there is a coordinate transformation happening and you might lose precision in the center of the sensor.

Cheers

maf
Posts: 16
Joined: 16 Jan 2023, 16:34

Re: simulation of a generic lidar sensor

Post by maf »

Hi, sorry to bring back an old topic, but I restarted working on it...

I was looking into the Velodyne-related simExtVision functions, as hinted at before.
  • I assume these functions do something specific to convert the depth map from the vision sensor into something matching the way a lidar works. Is that correct?
    I mean, for example, the fact that the "field of view" of a lidar is not really a frustum. When pointing e.g. the "Camera pixels to 3D positions" sample towards the ground, one can see straight lines of detected points; with a true lidar, those lines would be circular (and indeed they are when using the Velodyne object).
  • Are these "corrections" done in simVision.velodyneDataFromWorkImg or in simVision.handleVelodyneVLP16? (I assume the former.)
  • Is there anything really specific to the Velodyne VLP-16 in this implementation? If I change the horizontal and vertical resolution and the vertical FoV (which are input arguments of simVision.velodyneDataFromWorkImg), I would be emulating a different lidar, would I not? (Of course subject to adapting the parameters of the 4 vision sensors.)
I am sorry to ask these things, but the source code is not exactly easy to follow and I'd rather have some guidelines first.
Thank you for your help.

coppelia
Site Admin
Posts: 10375
Joined: 14 Dec 2012, 00:25

Re: simulation of a generic lidar sensor

Post by coppelia »

The Velodyne model basically operates in a very similar manner to a simpler laser range finder: the range finder uses a vision sensor instead of a proximity sensor for distance detection, mainly because of speed concerns.

With a vision sensor you can acquire a whole depth map at once. Using the depth map and the pose of the vision sensor, you can compute the 3D point in the scene that corresponds to each depth map pixel. This is what the Velodyne sensor models are doing, but with several vision sensors and several depth maps at once. You get some artifacts because a depth map is discrete (there is a jump from one pixel to its neighbouring pixels). One could use sub-pixel interpolation to reduce those artifacts (or increase the resolution of the vision sensors, which however makes things much slower).

The vision plugin internally computes 3D points from a depth map. The same can be done with the function simVision.coordinatesFromWorkImg.
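For reference, the manual variant of that computation could look roughly as follows for a perspective vision sensor (a sketch only: axis/sign conventions and the depth buffer function may need adjusting to your CoppeliaSim version):

Code: Select all

local h = sim.getObject('/visionSensor') -- example path
local near = sim.getObjectFloatParam(h, sim.visionfloatparam_near_clipping)
local far = sim.getObjectFloatParam(h, sim.visionfloatparam_far_clipping)
local viewAngle = sim.getObjectFloatParam(h, sim.visionfloatparam_perspective_angle)
local rx = sim.getObjectInt32Param(h, sim.visionintparam_resolution_x)
local ry = sim.getObjectInt32Param(h, sim.visionintparam_resolution_y)
local depth = sim.getVisionSensorDepthBuffer(h) -- values normalized to [0..1]

-- the perspective angle applies to the larger resolution dimension:
local tanHalf = math.tan(viewAngle / 2)
local tanX, tanY = tanHalf, tanHalf
if rx >= ry then tanY = tanHalf * ry / rx else tanX = tanHalf * rx / ry end

local pts = {}
for j = 0, ry - 1 do
    for i = 0, rx - 1 do
        -- depth values are linear between the near and far clipping planes,
        -- measured along the sensor's Z axis:
        local z = near + depth[j * rx + i + 1] * (far - near)
        local u = 2 * (i + 0.5) / rx - 1 -- [-1..1] across the image width
        local v = 2 * (j + 0.5) / ry - 1
        pts[#pts + 1] = u * tanX * z
        pts[#pts + 1] = v * tanY * z
        pts[#pts + 1] = z
    end
end
-- pts holds one 3D point per pixel, expressed in the sensor's frame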

Have a look at the demo model Models/components/sensors/Camera pixels to 3D positions.ttm, which illustrates that depth-map-to-3D-point calculation via 2 different methods. You can also notice the same type of artifacts with that model.

Cheers
