Dear all,
I'm trying to speed up the simulation of the Hokuyo and publish LaserScan messages in ROS. In my first implementation I used the Hokuyo_URG_04LX_UG01_ROS model, which was also suggested in many other topics. I modified the script so that the sensor was simulated as described here:
https://answers.ros.org/question/198843 ... erscanmsg/
However, even though I deleted all the unnecessary code and made it fully functional, I noticed that the simulation slows down significantly. This is due to the way the Hokuyo has been modelled: a proximity sensor which rotates to generate all the beams of the laser.
To speed up the simulation of the sensor I tested the Hokuyo_URG_04LX_UG01 model. This one is faster because it uses vision sensors (cameras) to simulate the laser scans. However, the main issue with this sensor is that it is not clear which values must be given to angle_min, angle_max and angle_increment when filling the ROS message to be published.
Do you have any idea how to correctly calculate these parameters from the parameters of the sensors used to simulate the scans? The most critical one is angle_increment, as a wrong value results in a distorted laser scan being published in ROS.
Thank you very much for your help.
How can I speed-up the simulation of the Hokuyo in ROS
- Posts: 3
- Joined: 06 Mar 2019, 00:17
Re: How can I speed-up the simulation of the Hokuyo in ROS
Hi Mario,
I don't believe you can use a vision sensor to simulate a LIDAR and output a LaserScan message without doing additional resampling.
The reason is pretty simple: the LaserScan message assumes the angle increment is constant. This is natural for a LIDAR, which operates by spinning a mirror at constant velocity and repeatedly sampling a distance at regular time (hence angle) intervals.
The LIDAR sensor model based on the proximity sensor replicates this behavior very faithfully.
On the other hand, the "fast" version uses one (or more) vision sensor(s). A vision sensor uses a pinhole camera model to compute the depth image "seen" by the sensor, from which 3D point coordinates are then extracted. The image is an organized grid of pixels which are regularly spaced on the image plane; since the ray angle is the arctangent of the pixel position, the angular increment between adjacent pixels is not constant.
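To make this concrete, here is a small standalone sketch (plain C++, no simulator API; the function name is my own) of the ray angle through each pixel column of a pinhole camera. Pixel centers are equally spaced on the image plane, so the angle is an arctangent of the pixel position, and the increment between adjacent columns shrinks toward the image edges:

```cpp
#include <cassert>
#include <cmath>

// Ray angle (radians) through pixel column i of a pinhole camera with
// `width` pixels and horizontal field of view `fov` (radians).
// Pixel centers are equally spaced on the image plane, so the angle is
// the arctangent of the pixel position, not a linear function of i.
double pixelRayAngle(int i, int width, double fov)
{
    double halfPlane = std::tan(fov / 2.0);             // image-plane half-width at focal length 1
    double x = (i + 0.5 - width / 2.0) / (width / 2.0); // normalized pixel position in [-1, 1]
    return std::atan(x * halfPlane);
}
```

With 640 columns and a 57-degree field of view, the increment between the two central columns is noticeably larger than between the two outermost ones, which is why a single angle_increment cannot describe such a scan exactly.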
If you use an image-based sensor, either you publish your points as a point cloud and use the pointcloud_to_laserscan node, or you do that conversion yourself in the sensor's script and publish the points as a LaserScan.
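One way to do that conversion yourself is to compute each pixel's true ray angle from the pinhole geometry and then rebin the measured ranges into the constant-increment grid that LaserScan assumes, keeping the nearest return per bin as a real LIDAR would. A sketch (the function name and the nearest-return bin policy are my own, not an existing API):

```cpp
#include <cassert>
#include <cmath>
#include <limits>
#include <vector>

// Rebin ranges measured at arbitrary (non-uniform) angles into the
// constant-increment grid assumed by sensor_msgs/LaserScan.
// Output bin b covers [angleMin + b*angleInc, angleMin + (b+1)*angleInc);
// bins that receive no sample stay at +inf (i.e. "no return").
std::vector<float> resampleToLaserScan(const std::vector<double>& angles,
                                       const std::vector<float>& ranges,
                                       double angleMin, double angleInc,
                                       std::size_t numBins)
{
    std::vector<float> scan(numBins, std::numeric_limits<float>::infinity());
    for (std::size_t k = 0; k < angles.size(); ++k) {
        int b = static_cast<int>(std::floor((angles[k] - angleMin) / angleInc));
        if (b >= 0 && b < static_cast<int>(numBins) && ranges[k] < scan[b])
            scan[b] = ranges[k]; // keep the closest return in each bin
    }
    return scan;
}
```

The published message would then take its angle_min, angle_increment and ranges length directly from angleMin, angleInc and numBins.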
Re: How can I speed-up the simulation of the Hokuyo in ROS
Dear Federico,
Thank you very much for your reply. Following your suggestion, I'm trying to get the point cloud from the Kinect, based on what you described here:
https://forum.coppeliarobotics.com/viewtopic.php?t=7351
Unfortunately, I'm having a few issues getting the code to work: no colored point cloud is generated. Below is the code that I'm using. I can't find where the mistakes are. Should I add vision sensor filters? If so, which ones?
Code:
function sysCall_init()
    depthCamHandle=sim.getObjectHandle('kinect_depth')
    depthView=sim.floatingViewAdd(0.9,0.9,0.2,0.2,0)
    sim.adjustView(depthView,depthCamHandle,64)
    colorCamHandle=sim.getObjectHandle('kinect_rgb')
    colorView=sim.floatingViewAdd(0.69,0.9,0.2,0.2,0)
    sim.adjustView(colorView,colorCamHandle,64)
    lineSize=2 -- in points
    maximumLines=9999
    red={1,0,0}
    drawObj=sim.addDrawingObject(sim.drawing_points,lineSize,0,-1,maximumLines,red)
end

function sysCall_sensing()
    m01=sim.getObjectMatrix(depthCamHandle,-1)
    r1,t1,u1=sim.readVisionSensor(depthCamHandle)
    img=sim.getVisionSensorImage(colorCamHandle)
    imgsz=sim.getVisionSensorResolution(colorCamHandle)
    if u1 then
        for j=0,u1[2]-1,1 do
            for i=0,u1[1]-1,1 do
                -- offset within the 3D points array:
                w=2+4*(j*u1[1]+i)
                -- point in global coordinates:
                p=sim.multiplyVector(m01,{u1[w+1],u1[w+2],u1[w+3]})
                -- indices of the point in the color image:
                h=math.floor((i+0.5)/u1[1]*imgsz[1])
                k=math.floor((j+0.5)/u1[2]*imgsz[2])
                -- offset within the color image:
                v=3*(k*imgsz[1]+imgsz[1]-1-h)
                -- color value:
                c={img[v+1],img[v+2],img[v+3]}
                print('p[1] '..p[1]..' p[2] '..p[2]..' p[3] '..p[3]..' c[1] '..c[1]..' c[2] '..c[2]..' c[3] '..c[3])
                sim.addDrawingObjectItem(drawObj,{p[1],p[2],p[3],c[1],c[2],c[3]})
            end
        end
    end
end

function sysCall_cleanup()
end
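For reference, the indexing in the script above assumes the packed layout returned by sim.readVisionSensor: the auxiliary table starts with the resolution (u1[1], u1[2]), followed by four values (x, y, z, distance) per point, which is where the offset 2+4*(j*resX+i) comes from. A standalone sketch of that indexing (plain C++, 0-based here rather than Lua's 1-based tables; the function name is my own):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Offset of the x coordinate of point (i, j) in a packed buffer laid out as:
//   buf[0] = resX, buf[1] = resY,
//   then 4 floats (x, y, z, distance) per point, row by row.
std::size_t pointOffset(const std::vector<float>& buf, std::size_t i, std::size_t j)
{
    std::size_t resX = static_cast<std::size_t>(buf[0]);
    return 2 + 4 * (j * resX + i);
}
```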
Thank you
Re: How can I speed-up the simulation of the Hokuyo in ROS
Is perhaps the "Packet1 is blank" option enabled on the depth sensor? That might cause this issue.
Re: How can I speed-up the simulation of the Hokuyo in ROS
Hi Federico,
The "Packet1 is blank" option was indeed enabled; I have now disabled it. However, I also had to modify the customization script associated with kinect_depth, using the code below for the sysCall_vision function. I tested this approach, but unfortunately it does not solve the problem of speeding up the simulation: the point cloud is still computed by the simulator.
Code:
local retVal={}
retVal.trigger=false
retVal.packedPackets={}
simVision.sensorDepthMapToWorkImg(inData.handle)
local trig,packedPacket=simVision.coordinatesFromWorkImg(inData.handle,{64,64},false)
if trig then
    retVal.trigger=true
end
if packedPacket then
    retVal.packedPackets[#retVal.packedPackets+1]=packedPacket
end
simVision.workImgToSensorImg(inData.handle)
return retVal
I'm currently exploring the alternative solution described in these posts:
https://forum.coppeliarobotics.com/viewtopic.php?t=7617
and
https://forum.coppeliarobotics.com/viewtopic.php?t=7450
This approach is faster because the point cloud is assembled in a ROS node. Below is the Lua code for publishing the RGB and depth images, together with the Float32MultiArray of depth values:
Code:
t=simROS.getTime()
image=sim.getVisionSensorImage(colorCam,0,0,0,0,1)
sim.transformImage(image,{640,480},4)
tb_rgb={}
tb_rgb['header']={stamp=t, frame_id="kinect_rgb"}
tb_rgb['height']=480
tb_rgb['width']=640
tb_rgb['encoding']='rgb8'
tb_rgb['step']=640 * 3
tb_rgb['data']=image
simROS.publish(rgb_pub, tb_rgb)
depthImage=sim.getVisionSensorImage(depthCam,0,0,0,0,1)
sim.transformImage(depthImage,{640,480},4)
tb_depth={}
tb_depth['header']={stamp=t, frame_id="kinect_depth"}
tb_depth['height']=480
tb_depth['width']=640
tb_depth['encoding']='rgb8'
tb_depth['step']=640 * 3
tb_depth['data']=depthImage
simROS.publish(depth_pub, tb_depth)
tb_pcd={}
tb_pcd['data']=sim.getVisionSensorDepthBuffer(depthCam)
simROS.publish(pcd_pub, tb_pcd)
And this is my code for publishing colored point clouds with the Kinect:
Code:
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <vector>

#include <opencv2/opencv.hpp>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <std_msgs/Float32MultiArray.h>

const int u_res = 480;
const int v_res = 640;
const float near_clip = 0.01;
const float far_clip = 3.5;
const float focal_length = (57 * M_PI) / 180; // 57 deg field of view, in radians

typedef pcl::PointXYZRGB Point;
typedef pcl::PointCloud<Point> Cloud;
typedef std_msgs::Float32MultiArray MultiArray;

Cloud::Ptr img2cloud(cv::Mat rgb, MultiArray depth_msg)
{
    std::vector<float> depth = depth_msg.data;
    if (rgb.cols != v_res or rgb.rows != u_res or depth.size() != u_res * v_res) {
        std::cout << "rgb and depth image size mismatch" << '\n';
        exit(0);
    }
    Cloud::Ptr cloud(new Cloud);
    for (size_t u = 0; u < u_res; u++) {
        for (size_t v = 0; v < v_res; v++) {
            cv::Vec3b rgb_ints = rgb.at<cv::Vec3b>(u, v);
            int w = v_res * u + v;
            // float z = far_clip * depth[w] + near_clip;
            float z = (far_clip - near_clip) * (1 - depth[w]) + near_clip;
            // float y = (u - u_res / 2) * z / focal_length;
            // float x = (v - v_res / 2) * z / focal_length;
            float y = z * sin((v - v_res / 2) * focal_length / v_res);
            float x = z * sin((u - u_res / 2) * focal_length / u_res);
            Point p;
            p.x = x, p.y = y, p.z = z;
            p.r = (int)rgb_ints[0]; p.g = (int)rgb_ints[1]; p.b = (int)rgb_ints[2];
            cloud->points.push_back(p);
        }
    }
    cloud->header.frame_id = "kinect_depth";
    cloud->width = cloud->points.size();
    cloud->height = 1;
    return cloud;
}
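As a cross-check: with a pinhole model (which the vision sensor uses), the back-projection is linear in the pixel coordinate, as in the commented-out lines above, but the focal length must be expressed in pixels, not radians. A sketch, assuming 57 degrees is the field of view along the 640-pixel axis and that pixels are square (the function name is my own):

```cpp
#include <cassert>
#include <cmath>

// Back-project pixel (u, v) with depth z (metres along the optical axis)
// using a pinhole model. `fov` is the field of view across the axis that
// has `v_res` pixels; the focal length in pixels is f = (v_res/2)/tan(fov/2).
// Square pixels are assumed, so the same f is used vertically.
void backproject(int u, int v, int u_res, int v_res, double fov, double z,
                 double& x, double& y)
{
    double f = (v_res / 2.0) / std::tan(fov / 2.0); // focal length in pixels
    x = (v - v_res / 2.0) * z / f;
    y = (u - u_res / 2.0) * z / f;
}
```

Under this model the commented-out lines would be correct once focal_length is the focal length in pixels rather than an angle in radians.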
Unfortunately, the published point cloud is totally distorted. Any idea where the mistake could be?
Re: How can I speed-up the simulation of the Hokuyo in ROS
Please post a screenshot of the "distorted" point cloud.
Are you sure the depth image is a rgb8 image?
I think it should be a 1 channel image.
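For a depth image, the usual convention is encoding "32FC1" (a single float channel), with step = width * 4 bytes and the raw float values packed back to back in the data field. A minimal sketch of that byte packing, without any ROS dependency (the helper names are my own; the field names mentioned are the standard sensor_msgs/Image ones):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Pack a float depth buffer into the byte layout expected by a
// sensor_msgs/Image with encoding "32FC1": one 4-byte float per pixel,
// row stride (step) = width * 4 bytes, native endianness.
std::vector<std::uint8_t> packDepth32FC1(const std::vector<float>& depth)
{
    std::vector<std::uint8_t> data(depth.size() * sizeof(float));
    std::memcpy(data.data(), depth.data(), data.size());
    return data;
}

// Read pixel i back out of the packed byte buffer (for checking).
float unpackPixel(const std::vector<std::uint8_t>& data, std::size_t i)
{
    float v;
    std::memcpy(&v, data.data() + i * sizeof(float), sizeof(float));
    return v;
}
```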