object detection by Vision Sensor

Sahba
Posts: 3
Joined: 28 Mar 2013, 10:36

object detection by Vision Sensor

Post by Sahba » 29 Mar 2013, 12:23

Hello,
I am using V-REP for my robotics project. My project consists of a conveyor, a robot, and some objects on the conveyor. I need to simulate the vision sensor part of the project: a vision sensor is placed above the conveyor, and the objects move along the conveyor beneath it. When the vision sensor sees the objects (blobs), it should send a command to the robot. How can the vision sensor detect the objects and send a command to the robot programmatically (using Lua)? Which functions should I use, and how? I tried some functions but only got errors.
Thank you very much for your help and time.

coppelia
Site Admin
Posts: 7020
Joined: 14 Dec 2012, 00:25

Re: object detection by Vision Sensor

Post by coppelia » 29 Mar 2013, 12:44

Hello Sahba,

You have to be careful with your vision sensors and the processing filters that you have set up. Some filters return values, others don't. If you have several filters returning values, you have to be careful to retrieve them at the right position.

Have a look at the demo scene blobDetectionWithPickAndPlace. Double-click the script icon next to object blobDetectionCamera to open the child script attached to that object.

Around line 20 you will find the instruction that reads the vision sensor, together with all the values it returns:

Code: Select all

result,t0,t1=simReadVisionSensor(camera) -- Here we read the image processing camera!
result and t0 are always returned, by every vision sensor. But since that vision sensor has a filter that does blob detection and returns values, we have a 3rd return value: t1.

What kind of values are returned in t1 is explained in the blob detection filter dialog: in this dialog, just double-click the "Blob detection on work image" filter to open its dialog.

And so basically:
  • t1[1]=blob count
  • t1[2]=value count per blob (vCnt)
  • t1[3]=blob1 size
  • t1[4]=blob1 orientation
  • t1[5]=blob1 position x
  • t1[6]=blob1 position y
  • t1[7]=blob1 width
  • t1[8]=blob1 height
  • t1[9]=blob2 size
  • etc.
vCnt is always 6 (for now). But use it anyway to compute the data position of blob x, since a future release of V-REP might add more values for each blob.
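For example, a minimal Lua sketch (assuming camera holds the vision sensor handle, as in the demo scene) that loops over all returned blobs using vCnt could look like this:

Code: Select all

-- Sketch only: iterate over the blob data returned by the blob detection filter
result,t0,t1=simReadVisionSensor(camera)
if (t1) then -- t1 is only present if a filter returned values
    local blobCount=t1[1]
    local vCnt=t1[2] -- number of values per blob (currently 6)
    for i=0,blobCount-1,1 do
        local base=2+i*vCnt
        local size=t1[base+1]
        local orientation=t1[base+2]
        local posX=t1[base+3]
        local posY=t1[base+4]
        -- t1[base+5] and t1[base+6] are the blob width and height
        simAddStatusbarMessage(string.format("blob %d: size=%.3f, pos=(%.3f,%.3f)",i+1,size,posX,posY))
    end
end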

Cheers

Sahba
Posts: 3
Joined: 28 Mar 2013, 10:36

Re: object detection by Vision Sensor

Post by Sahba » 30 Mar 2013, 00:21

Thank you for your reply.
I looked at the video you mentioned but could not access its code. It was on YouTube, wasn't it? I think I need a more basic explanation. Let me explain what I have done so far, and sorry if I am too wordy.

The first thing I want to explain is the filters I used in the filter dialog. My filter dialog looks like this:

Original Image to work Image
edge detection on work image
blob detection on work image
selective color on work image
work image to output image

I want to explain why I included the 'edge detection on work image' and 'selective color on work image' filters rather than just using 'blob detection on work image'. I only need to count blobs with the vision sensor, but using 'blob detection on work image' alone did not show any blobs in the vision sensor. Consequently, I had to add those two filters alongside 'blob detection on work image' to make my blobs show up. Does that make sense?


Furthermore, the vision sensor did not have any child script associated with it at first, so I added a threaded child script to it. I am using the following code in that child script (in the hierarchy my vision sensor appears as 'Vision_sensor').
I am trying to write code so that when the vision sensor sees three blobs on the conveyor, the robot starts moving.

simSetThreadSwitchTiming(2) -- Default timing for automatic thread switching
simDelegateChildScriptExecution()

handle=simGetObjectHandle("Vision_sensor#")

detectionCount=simHandleVisionSensor(?) -- This is my problematic part. What should I put in the parentheses? The API documentation says a number should be passed, but when I use numbers like 0, 1 or -1 I get the following error: Object does not exist. (simReadVisionSensor)

result,t0,t1=simReadVisionSensor(?) -- Same problem as above: what should I put in the parentheses?

while (simGetSimulationState()~=sim_simulation_advancing_abouttostop) do

    if (t1[1]==3) then
        VelRight=2 -- VelRight is the velocity of the right wheel of the robot. It is a parameter in the robot's child script; can I use it directly here, in the vision sensor's child script?
        VelLeft=2 -- VelLeft is the velocity of the left wheel of the robot
    end

end

Thank you very much for your time and your big help.

coppelia
Site Admin
Posts: 7020
Joined: 14 Dec 2012, 00:25

Re: object detection by Vision Sensor

Post by coppelia » 30 Mar 2013, 00:48

Sahba,

You can access the scene from the installation (i.e. when you download V-REP PRO EDU, a scenes folder is installed along with it). In there you can find the blobDetectionWithPickAndPlace.ttt scene. Make sure you inspect its various child scripts.

Then, you are right: most of the time you will need additional filters along with the blob detection filter. But I wouldn't use the edge detection. Remove it and move the "selective color on work image" filter one position up. Make sure you adjust it appropriately (take inspiration from the vision sensor in the above-mentioned scene).

Then, try to use a non-threaded child script, or otherwise manually switch the thread at each loop pass with simSwitchThread. And do not call simHandleVisionSensor; instead call simReadVisionSensor. Both of these functions require a vision sensor handle as their first argument, so make sure you refer to the API documentation.
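For instance, a minimal non-threaded child script sketch could look like the following (this assumes your vision sensor is named Vision_sensor in the scene hierarchy, as in your post; adapt the name and the processing to your scene):

Code: Select all

-- Sketch of a non-threaded child script attached to the vision sensor (legacy Lua API)
if (simGetScriptExecutionCount()==0) then
    camera=simGetObjectHandle("Vision_sensor") -- resolve the handle once
end

local result,t0,t1=simReadVisionSensor(camera) -- read the result of the sensor handling
if (t1 and t1[1]==3) then
    -- 3 blobs detected: this is the place to notify the robot's script
    simAddStatusbarMessage("3 blobs detected")
end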

Cheers

Sahba
Posts: 3
Joined: 28 Mar 2013, 10:36

Re: object detection by Vision Sensor

Post by Sahba » 02 Apr 2013, 22:38

Thank you for your time. It works now: the vision sensor can detect the blobs, count them, and return the correct blob count. I just have one more question. I want to go further and do the following: when the vision sensor sees, for example, 5 blobs, it should send a command to my robot, and the robot should start moving. How can I do that? I use a threaded script for my robot and a non-threaded script for my vision sensor. Which functions should I use, and should I call them in my vision sensor script or in my robot script? I think I should use them in my robot script.

Thank you very much for your big help.

coppelia
Site Admin
Posts: 7020
Joined: 14 Dec 2012, 00:25

Re: object detection by Vision Sensor

Post by coppelia » 02 Apr 2013, 22:44

Hello Sahba,

You now basically need to have your two scripts communicate. There are many ways to achieve that, but the easiest (and maybe not the most elegant) is to use signals: one script sets a signal value, the other reads it and clears it.
Another, more elegant way is to use tubes. Those are buffers that can be fed on one side and read/emptied on the other side.
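As an illustration, a minimal signal-based sketch could look like this (the signal name blobCount is just an example, and camera is assumed to hold the vision sensor handle):

Code: Select all

-- In the vision sensor's non-threaded child script: publish the blob count as a signal
local result,t0,t1=simReadVisionSensor(camera)
if (t1) then
    simSetIntegerSignal("blobCount",t1[1])
end

-- In the robot's threaded child script (inside its main loop): read and clear the signal
local count=simGetIntegerSignal("blobCount") -- returns nil if the signal was not set
if (count and count>=5) then
    simClearIntegerSignal("blobCount")
    -- start moving the robot here (e.g. set the wheel joint target velocities)
end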

Other than that, make sure you read the full section about the various means of communication.

Cheers

Pikapi
Posts: 14
Joined: 31 Jul 2013, 09:26

Re: object detection by Vision Sensor

Post by Pikapi » 02 Aug 2013, 10:42

Hi, the filters I used are listed below:

original image to work image
selective color on work image (double-click to edit)
offset and scale colors on work image (double-click to edit)
swap work image with buffer 1
Binary work image and trigger (double-click to edit)
Blob detection on work image (double-click to edit)
Offset and scale colors on work image (double-click to edit)
Add buffer 1 to work image
Original image to work image
work image to output image

There are two types of return values in t1: those from the "Binary work image and trigger" filter and those from the "Blob detection on work image" filter.

The code is: result,t0,t1=simReadVisionSensor(xxxVisionSensor)

May I know how to get the return values from both filters? Thanks.

coppelia
Site Admin
Posts: 7020
Joined: 14 Dec 2012, 00:25

Re: object detection by Vision Sensor

Post by coppelia » 02 Aug 2013, 11:14

Hello,

In your case, read the return values with:

Code: Select all

result,t0,t1,t2=simReadVisionSensor(visionSensorHandle)
where data related to the "Binary work image and trigger" filter is located in t1, and data related to the "Blob detection on work image" filter is located in t2.

Typically, from a child script (Lua):
  • t1[1]=proportion
  • t1[2]=x position of center of mass
  • t1[3]=y position of center of mass
  • t1[4]=orientation
  • t1[5]=roundness
and
  • t2[1]=blob count
  • t2[2]=n=value count per blob
  • t2[3]=blob 1 size
  • t2[4]=blob 1 orientation
  • t2[5]=blob 1 position x
  • t2[6]=blob 1 position y
  • t2[7]=blob 1 width
  • t2[8]=blob 1 height
  • t2[2+n+1]=blob 2 size
  • t2[2+n+2]=blob 2 orientation
  • ...
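For instance, a small sketch of how you might extract both sets of values (variable names are just placeholders):

Code: Select all

result,t0,t1,t2=simReadVisionSensor(visionSensorHandle)
if (t1) then
    -- "Binary work image and trigger" filter data:
    local proportion=t1[1]
    local comX,comY=t1[2],t1[3]
end
if (t2) then
    -- "Blob detection on work image" filter data:
    local blobCount=t2[1]
    local n=t2[2] -- values per blob
    for i=0,blobCount-1,1 do
        local size=t2[2+i*n+1]
        local posX=t2[2+i*n+3]
        local posY=t2[2+i*n+4]
    end
end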
Cheers
