Hello,
I'm working on a project in which I have to simulate a proximity sensor, and I also have to calculate the absorbability of all objects within the sensor's detection disc. So I can't use the proximity sensor from VRep, because that sensor only returns the distance to the nearest object.
I decided to use the point cloud of a vision sensor instead, and to calculate the absorbability of each point based on its color.
My problem now is:
I need the coordinates of the points that I get from the "extract coordinates from work image" filter. But when I use this filter, I get no color information.
When I use the unfiltered point cloud of the vision sensor, I get the color information, but the coordinates of the points are not relative to the sensor.
Does anybody know
- how I could fix this problem, so that I get both the extracted coordinates and the color information,
or
- how the "extract coordinates from work image" filter transforms the coordinates?
Thanks a lot for your answer!
Extract coordinates AND color
Re: Extract coordinates AND color
Hello,
once the pixel coordinates have been extracted from the vision sensor, you would have to find out the corresponding color pixel for each extracted coordinate. This is a kind of indirection and not very robust, I guess.
I would simply read the depth map and the RGB image of a vision sensor. Then go through each pixel in the depth map and compute its 3D position. The color is then given by the same pixel in the RGB image.
Doing the above in a script would be very inefficient, so the easiest would be to do that in a plugin that exports a script function similar to simExtMyPlugin_computePoints(visionSensorHandle).
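A minimal self-contained sketch of that per-pixel computation, assuming a perspective sensor (in a real plugin the two buffers would come from the regular API's simGetVisionSensorDepthBuffer and simGetVisionSensorImage; the function and struct names here are illustrative only):

```cpp
#include <cmath>
#include <vector>

// One 3D point (relative to the sensor) together with its color:
struct ColoredPoint { float x, y, z, r, g, b; };

// depth: sizeX*sizeY values normalized to [0,1]; rgb: 3*sizeX*sizeY values.
// nearClip/farClip: the sensor's clipping planes; viewAngle: the view angle.
std::vector<ColoredPoint> computeColoredPoints(
    const float* depth, const float* rgb, int sizeX, int sizeY,
    float nearClip, float farClip, float viewAngle)
{
    std::vector<ColoredPoint> pts;
    // adjust the view angles to the sensor's aspect ratio:
    float xAngle = viewAngle, yAngle = viewAngle;
    float ratio = float(sizeX) / float(sizeY);
    if (sizeX > sizeY)
        yAngle = 2.0f * atanf(tanf(xAngle / 2.0f) / ratio);
    else
        xAngle = 2.0f * atanf(tanf(xAngle / 2.0f) * ratio);
    for (int j = 0; j < sizeY; j++)
    {
        // ray angle of this pixel row, measured from the optical axis
        // (the +0.5 offset samples pixel centers, so the grid is symmetric):
        float yAng = -yAngle / 2.0f + yAngle * (float(j) + 0.5f) / float(sizeY);
        for (int i = 0; i < sizeX; i++)
        {
            float xAng = xAngle / 2.0f - xAngle * (float(i) + 0.5f) / float(sizeX);
            // the depth buffer is normalized between the clipping planes:
            float zDist = nearClip + depth[j * sizeX + i] * (farClip - nearClip);
            int c = 3 * (j * sizeX + i); // same pixel index in the rgb image
            pts.push_back({ tanf(xAng) * zDist, tanf(yAng) * zDist, zDist,
                            rgb[c], rgb[c + 1], rgb[c + 2] });
        }
    }
    return pts;
}
```

This pairs every depth pixel with its color pixel in a single pass, which avoids the indirection mentioned above.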
Here is the code of the "extract coordinates from work image" filter:
Code:
bool CSimpleFilter::processAndTrigger_imageToCoord(CVisionSensor* sensor,int sizeX,int sizeY,const float* inputImage,const float* inputDepth,float* outputImage,float* workImage,std::vector<float>& returnData,float* buffer1,float* buffer2,CDrawingContainer2D& drawingContainer)
{
    // Depth in the work image is normalized between the near and far clipping planes:
    float depthThresh=sensor->getNearClippingPlane();
    float depthRange=sensor->getFarClippingPlane()-depthThresh;
    float farthestValue=sensor->getFarClippingPlane();

    // Adjust the view angle (perspective mode) and ortho view size to the aspect ratio:
    float xAngle=sensor->getViewAngle();
    float yAngle=xAngle;
    float ratio=float(sizeX)/float(sizeY);
    if (sizeX>sizeY)
        yAngle=2.0f*(float)atan(tan(xAngle/2.0f)/ratio);
    else
        xAngle=2.0f*(float)atan(tan(xAngle/2.0f)*ratio);
    float xS=sensor->getOrthoViewSize();
    float yS=xS;
    if (sizeX>sizeY)
        yS=xS/ratio;
    else
        xS=xS*ratio;

    // The first two return values are the requested point counts along x and y:
    int xPtCnt=_intParameters[0];
    int yPtCnt=_intParameters[1];
    returnData.clear();
    returnData.push_back(float(xPtCnt));
    returnData.push_back(float(yPtCnt));

    if (sensor->getPerspectiveOperation())
    { // perspective mode: sample the image on a regular angular grid
        float yDist=0.0f;
        float dy=0.0f;
        if (yPtCnt>1)
        {
            dy=yAngle/float(yPtCnt-1);
            yDist=-yAngle*0.5f;
        }
        float dx=0.0f;
        if (xPtCnt>1)
            dx=xAngle/float(xPtCnt-1);
        float xAlpha=0.5f/(tan(xAngle*0.5f));
        float yAlpha=0.5f/(tan(yAngle*0.5f));
        float xBeta=2.0f*tan(xAngle*0.5f);
        float yBeta=2.0f*tan(yAngle*0.5f);
        for (int j=0;j<yPtCnt;j++)
        {
            float tanYDistTyAlpha=tan(yDist)*yAlpha;
            // pixel row corresponding to the current vertical angle:
            int yRow=int((tanYDistTyAlpha+0.5f)*(sizeY-0.5f));
            float xDist=0.0f;
            if (xPtCnt>1)
                xDist=-xAngle*0.5f;
            for (int i=0;i<xPtCnt;i++)
            {
                float tanXDistTxAlpha=tan(xDist)*xAlpha;
                int xRow=int((0.5f-tanXDistTxAlpha)*(sizeX-0.5f));
                int indexP=3*(xRow+yRow*sizeX);
                // average the rgb channels to retrieve the normalized depth:
                float intensity=(workImage[indexP+0]+workImage[indexP+1]+workImage[indexP+2])/3.0f;
                float zDist=depthThresh+intensity*depthRange;
                C3Vector v(tanXDistTxAlpha*xBeta*zDist,tanYDistTyAlpha*yBeta*zDist,zDist);
                float l=v.getLength();
                if (l>farthestValue)
                { // clamp points lying beyond the far clipping plane:
                    v=(v/l)*farthestValue;
                    returnData.push_back(v(0));
                    returnData.push_back(v(1));
                    returnData.push_back(v(2));
                    returnData.push_back(farthestValue);
                }
                else
                {
                    returnData.push_back(v(0));
                    returnData.push_back(v(1));
                    returnData.push_back(v(2));
                    returnData.push_back(l);
                }
                xDist+=dx;
            }
            yDist+=dy;
        }
    }
    else
    { // orthographic mode: sample the image on a regular metric grid
        float yDist=0.0f;
        float dy=0.0f;
        if (yPtCnt>1)
        {
            dy=yS/float(yPtCnt-1);
            yDist=-yS*0.5f;
        }
        float dx=0.0f;
        if (xPtCnt>1)
            dx=xS/float(xPtCnt-1);
        for (int j=0;j<yPtCnt;j++)
        {
            int yRow=int(((yDist+yS*0.5f)/yS)*(sizeY-0.5f));
            float xDist=0.0f;
            if (xPtCnt>1)
                xDist=-xS*0.5f;
            for (int i=0;i<xPtCnt;i++)
            {
                int xRow=int((1.0f-((xDist+xS*0.5f)/xS))*(sizeX-0.5f));
                int indexP=3*(xRow+yRow*sizeX);
                float intensity=(workImage[indexP+0]+workImage[indexP+1]+workImage[indexP+2])/3.0f;
                float zDist=depthThresh+intensity*depthRange;
                returnData.push_back(xDist);
                returnData.push_back(yDist);
                returnData.push_back(zDist);
                returnData.push_back(zDist); // in ortho mode the distance equals zDist
                xDist+=dx;
            }
            yDist+=dy;
        }
    }
    return(false); // we don't trigger
}
Cheers
Re: Extract coordinates AND color
Thank you!
I tried this too, but I have a problem with computing the 3D coordinates of each point, so that the origin of the point cloud's coordinate system is exactly in the middle, i.e. on the sensor.
E.g. with a 2x2 resolution, my points are [(0,0),(0,1),(1,0),(1,1)] (the order is secondary).
-> Here the origin is at (0,0), but I need it at (0.5,0.5). Simply shifting the x-y positions of the points is not the problem, though.
When the sensor now detects an object, the point that detects the object changes its z-position - not with the angle relative to the "new" coordinate origin, but with the angle relative to the "first" coordinate origin at (0,0).
So I don't have the "real" position of the point relative to the sensor.
I hope you understand my problem; it's a bit confusing.
Thanks a lot!
Re: Extract coordinates AND color
That's trigonometry.
You will have to write a function that takes as input:
- the pixel position x
- the pixel position y
- the pixel's distance from the sensor origin
and returns:
- the 3D position x/y/z relative to the sensor origin
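A minimal sketch of such a function, assuming a perspective sensor with the given view angles (the function name pixelToPoint is illustrative): each pixel is mapped to a ray angle measured from the sensor's optical axis, with a 0.5-pixel offset so the grid is centered on the sensor, and the measured distance is then projected along that ray.

```cpp
#include <cmath>

// Hypothetical helper: convert a pixel position plus the measured distance
// from the sensor origin into a 3D point relative to the sensor.
// px,py: pixel indices; resX,resY: resolution; xAngle,yAngle: view angles.
void pixelToPoint(int px, int py, int resX, int resY,
                  float xAngle, float yAngle, float distance,
                  float& x, float& y, float& z)
{
    // pixel centers: the +0.5 offset puts the origin in the image center,
    // which addresses the (0.5,0.5) centering issue described above
    float xAng = xAngle / 2.0f - xAngle * (float(px) + 0.5f) / float(resX);
    float yAng = -yAngle / 2.0f + yAngle * (float(py) + 0.5f) / float(resY);
    // unit direction of the viewing ray through that pixel:
    float dx = tanf(xAng), dy = tanf(yAng), dz = 1.0f;
    float len = sqrtf(dx * dx + dy * dy + dz * dz);
    x = distance * dx / len;
    y = distance * dy / len;
    z = distance * dz / len;
}
```

Because the ray angle is computed from the pixel center relative to the image center, opposite pixels yield mirrored x/y values and the returned point lies exactly at the measured distance from the sensor origin.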