Conversion of Depth Map to 3D point cloud using Kinect

coppelia
Site Admin
Posts: 6218
Joined: 14 Dec 2012, 00:25

Re: Conversion of Depth Map to 3D point cloud using Kinect

Post by coppelia » 07 Jun 2017, 17:23

I forgot to mention another, probably somewhat faster way: pack the float data into a bytearray and pass it as the buffer argument of simxCallScriptFunction. Inside a child script you can then quickly recover the individual arrays with the unpacking functions.
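On the Python side, the packing step might look like this minimal sketch (assuming 32-bit little-endian floats; the helper names here are illustrative, not part of the remote API):

```python
import struct

def pack_floats(values):
    # Pack each float as little-endian 32-bit, the layout the Lua
    # unpacking functions (e.g. sim.unpackFloatTable) typically expect.
    return struct.pack('<%df' % len(values), *values)

def unpack_floats(buf):
    # Inverse operation, mirroring what a child script would do.
    return list(struct.unpack('<%df' % (len(buf) // 4), buf))

depth = [0.5, 1.25, 3.0]
buf = pack_floats(depth)   # 12 bytes: 3 floats x 4 bytes each
print(unpack_floats(buf))  # → [0.5, 1.25, 3.0]
```

The resulting bytes would then be handed to simxCallScriptFunction as its buffer argument.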

Cheers

ahundt
Posts: 112
Joined: 29 Jan 2015, 04:21

Re: Conversion of Depth Map to 3D point cloud using Kinect

Post by ahundt » 12 Nov 2017, 01:11

Thanks, I’ve been able to get point cloud display working from Python. I’m still having trouble getting sensor images to display in V-REP from the same external source, though, even though I’ve made sure the external data source box is set correctly. The image window of the Kinect sensor object just remains gray and blank. The data itself seems fine, since I can display full-color point clouds now.

coppelia
Site Admin
Posts: 6218
Joined: 14 Dec 2012, 00:25

Re: Conversion of Depth Map to 3D point cloud using Kinect

Post by coppelia » 13 Nov 2017, 22:26

Can you make it display a different image from within the child script? Do you think your problem is related to the usage of the remote API? How do you apply the image to the vision sensor?

Cheers

ahundt
Posts: 112
Joined: 29 Jan 2015, 04:21

Re: Conversion of Depth Map to 3D point cloud using Kinect

Post by ahundt » 20 Nov 2017, 00:51

I've done a lot towards making point cloud and image display from python work properly, but I've run into some surprising behavior.

I'm trying to display an image that looks like this along with its point cloud:
Image

However, if I simply load the image up into a numpy array with uint8 values and send it over to V-REP I get this:
Image

I've verified that if I display the images with utilities other than V-REP and/or save the images out to a png file they look correct. I believe at this point I've narrowed down the source of the discrepancy to V-REP itself.

I need to rotate the image by 180 degrees, flip it left-right, and then invert the colors to get something close to correct:

Image

The bottom image on the left side has been rotated 180 degrees and flipped left-right but not inverted. The top two images on the left side were rotated, flipped left-right, and inverted. As you can see, even after applying the manual fix (the top two color images in the left column), some image corruption appears after the inversion that is not visible in the original image.

It also seems that I need the rotate-180 and flip-left-right operations for the point cloud display as well, though the color inversion is not necessary there. Also note that I had to implement my own custom remote API function to get this far, because simxSetVisionSensorImage had no effect at all; I used vrep.simxCallScriptFunction to implement my own transfer for both the images and the point clouds.

Is this flipped and rotated coordinate system something that should be expected for images in V-REP? Perhaps this is a row major vs column major difference?
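For what it's worth, the manual fix above can be sketched with numpy. Note that a 180-degree rotation followed by a left-right flip is equivalent to a single up-down flip, which makes me suspect (just a guess) that the sensor simply stores its rows bottom-up:

```python
import numpy as np

# A tiny 2x2 RGB image with distinct uint8 values per pixel/channel.
img = np.arange(12, dtype=np.uint8).reshape(2, 2, 3)

# Rotate 180 degrees, then flip left-right...
rotated_flipped = np.fliplr(np.rot90(img, 2))
# ...which is the same as a single vertical flip:
assert np.array_equal(rotated_flipped, np.flipud(img))

# uint8 color inversion (the extra step needed for images, not point clouds).
inverted = 255 - rotated_flipped
```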

coppelia
Site Admin
Posts: 6218
Joined: 14 Dec 2012, 00:25

Re: Conversion of Depth Map to 3D point cloud using Kinect

Post by coppelia » 20 Nov 2017, 18:40

Strange that simxSetVisionSensorImage doesn't work for you. Can you try the following code with the rosInterfaceTopicPublisherAndSubscriber.ttt demo scene?

Code:

import vrep

print('Program started')
vrep.simxFinish(-1)  # just in case, close all opened connections
clientID = vrep.simxStart('127.0.0.1', 19997, True, True, 5000, 5)
if clientID != -1:
    print('Connected to remote API server')
    res, v0 = vrep.simxGetObjectHandle(clientID, 'Vision_sensor', vrep.simx_opmode_oneshot_wait)
    res, v1 = vrep.simxGetObjectHandle(clientID, 'PassiveVision_sensor', vrep.simx_opmode_oneshot_wait)

    # start streaming, then read from the buffer inside the loop
    res, resolution, image = vrep.simxGetVisionSensorImage(clientID, v0, 0, vrep.simx_opmode_streaming)
    while vrep.simxGetConnectionId(clientID) != -1:
        res, resolution, image = vrep.simxGetVisionSensorImage(clientID, v0, 0, vrep.simx_opmode_buffer)
        if res == vrep.simx_return_ok:
            # write the freshly read image back to the passive vision sensor
            res = vrep.simxSetVisionSensorImage(clientID, v1, image, 0, vrep.simx_opmode_oneshot)
    vrep.simxFinish(clientID)
else:
    print('Failed connecting to remote API server')
print('Program ended')
If you run the scene, many errors might display in the status bar if ROS is not running, but that doesn't really matter: the image is read by the remote API client and sent back.

As to why the image is inverted, I have no idea. I do know that the RGB channel order can differ depending on your image processing library (e.g. OpenCV uses BGR). In the next release of V-REP you will be able to use the following API functions and constants for these tasks:

sim.transformBuffer (e.g. with sim.buffer_uint8rgb and sim.buffer_uint8bgr)
sim.transformImage (for x/y flipping)
sim.combineRgbImages (for fast vert/horiz. image concatenation)
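Until those functions are available, the rgb/bgr conversion at least is easy to do client-side with numpy (a sketch of the equivalent array operation, not the upcoming V-REP API):

```python
import numpy as np

img = np.array([[[10, 20, 30]]], dtype=np.uint8)  # a single RGB pixel
bgr = img[..., ::-1]                              # reverse the channel axis
print(bgr.tolist())  # → [[[30, 20, 10]]]
```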

The beta should hopefully be out in 2-3 weeks...

Cheers
