vision sensor & UI

Typically: "How do I... ", "How can I... " questions
markusi
Posts: 11
Joined: 19 Apr 2013, 09:47

vision sensor & UI

Post by markusi » 24 May 2013, 08:38

Hi,

I have problems creating a UI that is associated with a vision sensor and can display the data it acquires. I did the following:

1.) I add a vision sensor to a scene (with an object, e.g. a robot, in its view cone).
2.) Then I create a UI with the same cell count as the sensor resolution (horizontal and vertical).
3.) Then I associate the UI with the vision sensor.
4.) I start the simulation and select the sensor in order to make the UI appear.

I expected the vision sensor image to be visible in the UI, but it is not. What am I doing wrong?

regards

Markus

coppelia
Site Admin
Posts: 7388
Joined: 14 Dec 2012, 00:25

Re: vision sensor & UI

Post by coppelia » 24 May 2013, 08:59

Hello Markus,

Here is what you should do:
  • Create a vision sensor
  • Create a custom user interface
  • Create a surface (i.e. a merged button) in the custom UI that has the same proportions as the vision sensor's resolution. The resolution of the surface itself is not important.
  • With the surface selected, check Transparent / show background texture in this dialog (lower part). Then click Set button texture, then Select texture from existing textures, and select the name of the vision sensor (something like visionSensor [resX x resY] (dynamic texture))
That's it!

A simpler way of displaying the content of a vision sensor is to add an auxiliary view ([right popup --> Add --> Floating view]). Then select your vision sensor, then right click on the newly added floating view [right popup --> View --> Associate view with selected vision sensor].

Cheers

Pikapi
Posts: 14
Joined: 31 Jul 2013, 09:26

Re: vision sensor & UI

Post by Pikapi » 31 Jul 2013, 10:08

Hi, may I know how to get and view the data from a vision sensor? For example the color code for red, the size of the object, etc. Thanks.

coppelia
Site Admin
Posts: 7388
Joined: 14 Dec 2012, 00:25

Re: vision sensor & UI

Post by coppelia » 31 Jul 2013, 16:27

Hello,

You have several API functions to get vision sensor data. Have a look at the vision sensor related API functions. For more complex information, use a vision sensor filter, and read the filter's return data with simReadVisionSensor.
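As a minimal sketch (assuming a sensor object named 'Vision_sensor', and the default filter output of older V-REP versions, where the first auxiliary packet holds 15 values: min/max/average of intensity, red, green, blue and depth):

```lua
-- Read the vision sensor through its filter pipeline from a child script.
local sensor = simGetObjectHandle('Vision_sensor')

-- result is the detection state (-1 on error, 0/1 otherwise);
-- packet1 is the first auxiliary value packet produced by the filters.
local result, packet1 = simReadVisionSensor(sensor)
if result >= 0 and packet1 then
    -- With the default filter, index 11 is the average intensity (0..1).
    local avgIntensity = packet1[11]
end
```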

Cheers

Pikapi
Posts: 14
Joined: 31 Jul 2013, 09:26

Re: vision sensor & UI

Post by Pikapi » 01 Aug 2013, 18:13

Hi, by using a vision sensor filter, I am able to make the robot move to a grape. When I put down a bunch of grapes, the vision sensor can't differentiate the individual grapes but sees the bunch as one object. May I know how to differentiate each grape? I added the "Edge detection on work image" filter, but it doesn't work. Can I combine the shortest-distance data with the vision sensor data so that the robot arm can select which grape to pick first? Thanks.

coppelia
Site Admin
Posts: 7388
Joined: 14 Dec 2012, 00:25

Re: vision sensor & UI

Post by coppelia » 01 Aug 2013, 19:30

Hello,

We cannot help you with the image processing implementation itself. You can however access the image content and the depth buffer content of a vision sensor. Using those two arrays should allow you to write your own image processing filter or algorithm.
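A hypothetical starting point (the sensor name and the red-pixel threshold are assumptions): fetch both arrays and scan for reddish pixels yourself; separating touching objects (e.g. via connected components) would have to be built on top of this.

```lua
-- Grab the raw image and depth buffer of a vision sensor.
local sensor = simGetObjectHandle('Vision_sensor')
local res = simGetVisionSensorResolution(sensor)     -- {resX, resY}
local image = simGetVisionSensorImage(sensor)        -- RGB floats, 0..1
local depth = simGetVisionSensorDepthBuffer(sensor)  -- floats, 0..1 (near..far)

for y = 0, res[2]-1 do
    for x = 0, res[1]-1 do
        local i = 3*(y*res[1]+x)
        local r, g, b = image[i+1], image[i+2], image[i+3]
        if r > 0.5 and g < 0.3 and b < 0.3 then
            -- Candidate 'grape' pixel; its normalized distance:
            local d = depth[y*res[1]+x+1]
        end
    end
end
```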

Cheers

hassan
Posts: 10
Joined: 20 Aug 2013, 12:31

Re: vision sensor & UI

Post by hassan » 20 Aug 2013, 12:45

Hello,

Is there a direct way to write the data received from the vision and depth sensors of a Kinect camera to files, e.g. from the embedded script?

Regards

coppelia
Site Admin
Posts: 7388
Joined: 14 Dec 2012, 00:25

Re: vision sensor & UI

Post by coppelia » 20 Aug 2013, 16:48

Hello,

Yes, you can simply use a write construct similar to this one.
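For instance, a minimal sketch (the sensor name and file path are assumptions) that dumps a vision sensor's depth buffer to a text file from a child script, using Lua's standard io library:

```lua
-- Write the normalized depth buffer (values in [0,1]) to a file.
local sensor = simGetObjectHandle('kinect_visionSensor')
local depth = simGetVisionSensorDepthBuffer(sensor)
local f = io.open('depth.txt', 'w')
for i = 1, #depth do
    f:write(depth[i], '\n')
end
f:close()
```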

Cheers

hassan
Posts: 10
Joined: 20 Aug 2013, 12:31

Re: vision sensor & UI

Post by hassan » 21 Aug 2013, 09:33

Hello,

Thanks a lot for your reply. I am a bit confused though, since it's really been just 1-2 days since I started using V-REP. What I actually want is to add 4-6 Kinects and some random objects to a scene; during the simulation I want to get the RGB image and the depth image from each Kinect and write them to files. And of course I want both of them to be of the same resolution and mapped to each other, or at least I should have the extrinsic parameters relating the two images so that I can do the mapping myself.

Now when I add the Kinect and read the associated script, I see that 'kinect_visionSensor' is actually used to display the depth image, and 'kinect_Camera' is used to display the RGB image. As mentioned in the documentation, a vision sensor has a fixed resolution, which of course I can check by using:

res=simGetVisionSensorResolution(depthCam)

But I can't do the same for 'kinect_Camera'. Similarly, I can somehow access the depth image by using image=simGetVisionSensorImage(depthCam), though I think I should actually use simGetVisionSensorDepthBuffer. But I don't know how to get the RGB image from 'kinect_Camera'. It would be great if you could help me learn how to get mapped depth and RGB data from a Kinect.

Best regards

coppelia
Site Admin
Posts: 7388
Joined: 14 Dec 2012, 00:25

Re: vision sensor & UI

Post by coppelia » 21 Aug 2013, 09:51

Hello,

I see your problem. A vision sensor in V-REP has a resolution and can perform image processing. But a camera in V-REP has no resolution (you can adjust the size of the view, and the number of displayed pixels will automatically adjust). So in your case, you have several possibilities:
  • you could copy and paste the existing vision sensor, and modify its filter in order to display its RGB content. Then you still need to read the desired information (see below)
  • or (and this is more direct), you can simply use the same vision sensor to read the depth map (with simGetVisionSensorDepthBuffer) and the RGB content (with simGetVisionSensorImage)
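The second option can be sketched as follows (assuming the Kinect model's 'kinect_visionSensor' object): one vision sensor yields both images, and since they come from the same pose, resolution and view angle, they are already pixel-aligned.

```lua
-- Read depth and RGB from the same vision sensor in one simulation step.
local sensor = simGetObjectHandle('kinect_visionSensor')
local res = simGetVisionSensorResolution(sensor)     -- {resX, resY}
local rgb = simGetVisionSensorImage(sensor)          -- 3*resX*resY floats
local depth = simGetVisionSensorDepthBuffer(sensor)  -- resX*resY floats
-- Pixel (x,y) maps to rgb[3*(y*res[1]+x)+1 .. +3] and depth[y*res[1]+x+1],
-- so no extrinsic calibration between the two images is needed.
```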
Cheers
