Recently I have been trying to use a depth sensor to compute the world position of a pixel. I see that the sensor behaves differently from a real camera, and I am not sure whether my method of computing a position from the perspective angle and the depth value is correct.
Code:
import math
import numpy as np

camXHalfAngle = camXAngleInDegrees * 0.5 * math.pi / 180
# vertical half-angle derived from the horizontal half-angle and the aspect ratio
camYHalfAngle = math.atan(math.tan(camXHalfAngle) * camYResolution / camXResolution)
nearClippingPlane = 0.01
depthAmplitude = 3.5
# per-pixel angles; 320 and 240 are half of the 640x480 resolution
xAngle = ((320 - u + 0.5) / 320) * camXHalfAngle
yAngle = ((240 - v + 0.5) / 240) * camYHalfAngle
# u and v are the pixel coordinates; the depth image is stored row-first
# (turned), hence the index order [v-1][u-1]
depthValue = image_depth[v - 1][u - 1]
zCoord = nearClippingPlane + depthAmplitude * depthValue
xCoord = math.tan(xAngle) * zCoord
yCoord = math.tan(yAngle) * zCoord
# note: np.float was removed in newer NumPy versions, so use plain float
pixel_coordinates_kinect = np.mat([[xCoord], [yCoord], [zCoord], [1]], float)
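To make the question concrete, here is the same computation wrapped as a self-contained function (this is only a sketch of what I think the method does, not the sensor's documented API; the resolution 640x480 and the 57-degree horizontal field of view are assumptions, and depth is assumed to be a value in [0, 1] mapped linearly to [nearClippingPlane, nearClippingPlane + depthAmplitude] along the optical axis):

```python
import math
import numpy as np

def pixel_to_camera_coords(u, v, depth_value,
                           x_res=640, y_res=480,
                           x_fov_deg=57.0,        # assumed horizontal FOV
                           near_clip=0.01,
                           depth_amplitude=3.5):
    """Convert a pixel (u, v) plus a normalized depth value to a
    homogeneous 3D point in the camera frame."""
    cam_x_half = math.radians(x_fov_deg) * 0.5
    # vertical half-angle from the horizontal one and the aspect ratio
    cam_y_half = math.atan(math.tan(cam_x_half) * y_res / x_res)
    # angle of the ray through this pixel, measured from the optical axis
    x_angle = ((x_res / 2 - u + 0.5) / (x_res / 2)) * cam_x_half
    y_angle = ((y_res / 2 - v + 0.5) / (y_res / 2)) * cam_y_half
    # depth_value in [0, 1] mapped linearly onto the sensor's range
    z = near_clip + depth_amplitude * depth_value
    x = math.tan(x_angle) * z
    y = math.tan(y_angle) * z
    return np.array([x, y, z, 1.0])

# a pixel at the image center should lie (almost) on the optical axis
p = pixel_to_camera_coords(320, 240, 0.5)
```

For the center pixel the x and y offsets are nearly zero (not exactly zero, because of the half-pixel shift), and z is near_clip + depth_amplitude * 0.5.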
zhuang