
How can I use the Python Remote API and a Kinect sensor to enable a robot to pick up objects?

Posted: 19 May 2020, 03:09
by SS_Seagull
Hello!

I am a beginner with the CoppeliaSim environment and would like to simulate a robot picking up an object.

For this I am using an IRB140 robot with a gripper and a Kinect. The Kinect acquires RGB and depth information and feeds it to a neural network, which detects the object and returns a rectangular bounding box (currently working, thanks to other posts on this forum).

I would like to use this bounding box to calculate the target point (the center of the rectangle) and feed it to the robot, which would then orient itself appropriately to grab the object.

The issue here is: how do I convert the target point from a 2D location on an image into world coordinates? And how do I use the Python Remote API to communicate with the robot?

Thank you!

Re: How can I use the Python Remote API and a Kinect sensor to enable a robot to pick up objects?

Posted: 22 May 2020, 11:00
by coppelia
Hello,

have a look at the model in your model library at Models/components/sensors/Blob to 3D position.ttm. There you will find the calculations that transform an image position (X/Y) into a world position (X/Y/Z).
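
In case it helps while you study that model, here is a minimal sketch of the underlying pinhole-camera math, written as plain Python outside of CoppeliaSim; the function name and arguments are illustrative, not part of any CoppeliaSim API:

import math

def pixel_to_sensor_frame(u, v, depth, res_x, res_y, fov_x):
    # u, v: pixel coordinates (origin top-left); depth: metric distance
    # along the view axis; fov_x: horizontal perspective angle in radians
    nx = u / res_x - 0.5           # normalized coords in [-0.5, 0.5]
    ny = 0.5 - v / res_y           # image v grows downwards
    # vertical field of view follows from the aspect ratio
    fov_y = 2.0 * math.atan(math.tan(fov_x / 2.0) * res_y / res_x)
    # at distance `depth`, the image plane spans 2*depth*tan(fov/2) metres
    x = nx * 2.0 * depth * math.tan(fov_x / 2.0)
    y = ny * 2.0 * depth * math.tan(fov_y / 2.0)
    return (x, y, depth)           # point in the vision sensor's frame

Keep in mind that CoppeliaSim's depth buffer is normalized between the near and far clipping planes (metric depth = near + value * (far - near)), and that the resulting point is expressed in the sensor's frame: you still need to transform it with the sensor's pose to obtain world coordinates. The exact, authoritative calculation is in the script of the Blob to 3D position model itself.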

Have also a look at the simple example Python programs in programming/remoteApiBindings/python and/or programming/b0RemoteApiBindings/python for examples of how to connect to and interact with CoppeliaSim from Python.
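
With the legacy remote API, connecting boils down to something like the sketch below. The port must match a remote API server started on the CoppeliaSim side (e.g. via simRemoteApi.start(19999) in a script, or the default continuous port from remoteApiConnections.txt), and 'kinect_rgb' is assumed to be the sensor's name in your scene:

import sim  # sim.py / simConst.py plus the remoteApi library,
            # found in programming/remoteApiBindings/python

sim.simxFinish(-1)  # close any stale connections first
client_id = sim.simxStart('127.0.0.1', 19999, True, True, 5000, 5)
if client_id == -1:
    raise RuntimeError('could not connect to CoppeliaSim')

# example: grab a handle and read the RGB camera once
err, cam = sim.simxGetObjectHandle(client_id, 'kinect_rgb',
                                   sim.simx_opmode_blocking)
err, res, img = sim.simxGetVisionSensorImage(client_id, cam, 0,
                                             sim.simx_opmode_blocking)

sim.simxFinish(client_id)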

Cheers

Re: How can I use the Python Remote API and a Kinect sensor to enable a robot to pick up objects?

Posted: 25 May 2020, 04:09
by SS_Seagull
Hello!

Thank you for the reply!

I was hoping you could explain how exactly this line in the code works, and how I could change it to work with the Kinect, since it has two handles, kinect_depth and kinect_rgb, and I am not too sure how exactly they work:

local res,packet1,packet2=sim.handleVisionSensor(sensor)

What exactly is going on here?

Re: How can I use the Python Remote API and a Kinect sensor to enable a robot to pick up objects?

Posted: 26 May 2020, 10:27
by coppelia
That line (i.e. local res,packet1,packet2=sim.handleVisionSensor(sensor)) basically acquires a new image with the specified vision sensor. After the acquisition of that image, a vision callback function is automatically called (if one is present). Then the call returns.

In the Kinect model available in the model library (in components/sensors/kinect.ttm), sim.handleVisionSensor is implicitly called via the main script with sim.handleVisionSensor(sim.handle_all_except_explicit), since both vision sensors have their explicit handling flag unchecked.
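
From the Python side this means you normally don't need to call sim.handleVisionSensor yourself while the simulation is running: you can simply read both sensors' buffers through the remote API. A minimal sketch, assuming a legacy remote API connection (client_id) and the handle names from the Kinect model:

_, rgb_h = sim.simxGetObjectHandle(client_id, 'kinect_rgb',
                                   sim.simx_opmode_blocking)
_, depth_h = sim.simxGetObjectHandle(client_id, 'kinect_depth',
                                     sim.simx_opmode_blocking)

# RGB image: flat list of ints, 3 values per pixel
_, res, rgb = sim.simxGetVisionSensorImage(client_id, rgb_h, 0,
                                           sim.simx_opmode_blocking)
# depth buffer: flat list of floats, normalized between the
# sensor's near and far clipping planes
_, dres, depth = sim.simxGetVisionSensorDepthBuffer(
    client_id, depth_h, sim.simx_opmode_blocking)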

Cheers

Re: How can I use the Python Remote API and a Kinect sensor to enable a robot to pick up objects?

Posted: 01 Jun 2020, 08:53
by SS_Seagull
Hello!

How would I go about instructing the IRB140 to move by giving it the (x, y, z) values of my target object's position?
I am currently using sim.simxSetObjectPosition on the ManipulatorSphere, since this is the only function I am aware of that accepts Cartesian points as input.
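
Concretely, what I am doing at the moment looks roughly like this (client_id comes from simxStart, and the -1 means the position is given in world coordinates):

_, sphere = sim.simxGetObjectHandle(client_id, 'ManipulatorSphere',
                                    sim.simx_opmode_blocking)
target = [0.4, 0.1, 0.3]   # desired (x, y, z) of the object
sim.simxSetObjectPosition(client_id, sphere, -1, target,
                          sim.simx_opmode_oneshot)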

Second, I am unable to connect an end effector to the IRB140 using the Assemble/Disassemble option. It always pops off when I start the simulation, but the Kinect camera attaches correctly. Is there some other procedure I am unaware of here?

Third, is there any alternative way of finding the coordinates of objects using the Kinect?

Re: How can I use the Python Remote API and a Kinect sensor to enable a robot to pick up objects?

Posted: 04 Jun 2020, 05:39
by coppelia
Hello,

have a look at sim.getConfigForTipPose: it will allow you to figure out which joint configurations can bring the end-effector into a desired pose. Related to that, check out the demo scene scenes/ik_fk_simple_examples/8-computingJointAnglesForRandomPoses.ttt. In the next release, normally out next month, you will be able to build and solve IK tasks fully programmatically, also via scripting within CoppeliaSim.

The assemble/disassemble option works with prepared models. In the case of the IRB140 model, you should attach the end-effector to the object IRB140_connection. Then it should work. Also read this page, which explains why your end-effector pops off.

About your third question: well, it depends on what code you are willing to write. The Kinect gives you the raw data (e.g. an image and a depth map); your task is then to transform this into a 3D position. Using a blob detection filter, finding the center of the blob in the image, then converting that 2D position into a 3D position is one of the easier approaches (see the example I suggested in my previous post).
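
For instance, on the Python side you could compute the blob's centre from the RGB image with something like the following sketch (it assumes OpenCV and NumPy are installed; the colour bounds are of course scene-specific):

import numpy as np
import cv2

def blob_center(rgb_flat, res_x, res_y, lower, upper):
    # remote API image values may arrive as signed bytes: wrap to uint8
    img = np.asarray(rgb_flat, dtype=np.int16).astype(np.uint8)
    img = img.reshape((res_y, res_x, 3))
    img = cv2.flip(img, 0)   # CoppeliaSim images are vertically flipped
    # keep only pixels within the given RGB bounds
    mask = cv2.inRange(img, np.array(lower), np.array(upper))
    m = cv2.moments(mask)
    if m['m00'] == 0:
        return None          # no blob found
    return (m['m10'] / m['m00'], m['m01'] / m['m00'])   # pixel (u, v)

The returned pixel position, together with the corresponding value in the depth buffer, is then what you feed into the 2D-to-3D conversion discussed earlier.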

Cheers

Re: How can I use the Python Remote API and a Kinect sensor to enable a robot to pick up objects?

Posted: 14 Jun 2020, 01:58
by SS_Seagull
I'm having a bit of trouble adding my end effector (Barrett Hand simplified) to the IK group for the IRB140, so that the whole robot moves when I move the end effector (the IRB140 is in mode 2, i.e. IK through the manipulation sphere).

As far as my understanding goes, I'm trying to add the group associated with the proximity sensor, but it's not showing up in the drop-down menu.

Re: How can I use the Python Remote API and a Kinect sensor to enable a robot to pick up objects?

Posted: 15 Jun 2020, 07:42
by coppelia
I don't understand what you mean. Take an empty scene. Drag and drop model models/robots/non-mobile/ABB IRB 140.ttm into it. Then drag and drop model models/components/grippers/Barrett Hand (simplified).ttm into the scene. Keep the gripper selected, then ctrl-select object IRB140_connection and press the assemble/disassemble toolbar button: the gripper gets attached to the robot.

Run the simulation and move the green sphere: the robot and gripper behave as expected.

Cheers