In this video, the camera detects the spatial pose and posture of an augmented reality tag. This information is fed to a Schunk LWA3 manipulator controller (inside the simulator), which keeps a fixed distance from the tag and tracks its motion. If the tag were placed on a door and the camera fixed to the end tip of the manipulator, the controller would actuate the mobile base and then the arm to reach an accurate target position at the door's handle, and then start a prerecorded door-opening sequence.
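The distance-keeping behaviour described above could be sketched as a simple proportional controller. This is an illustrative example only, not the actual controller used in the video; the desired distance, gain, and function names are all assumptions:

```python
import math

# Hypothetical sketch of a distance-keeping tracker (not the actual
# controller from the video): drive the camera toward a point that lies
# a fixed stand-off distance from the detected AR tag.
DESIRED_DISTANCE = 0.5   # metres to keep from the tag (assumed value)
GAIN = 0.8               # proportional gain (assumed value)

def tracking_velocity(tag_pos):
    """Return a velocity command (vx, vy, vz) in the camera frame.

    tag_pos: detected tag position (x, y, z) in the camera frame.
    A positive distance error (tag too far) commands motion toward it.
    """
    dist = math.sqrt(sum(c * c for c in tag_pos))
    if dist < 1e-9:
        return (0.0, 0.0, 0.0)  # degenerate: tag at the camera origin
    error = dist - DESIRED_DISTANCE
    # Scale the unit vector toward the tag by the proportional term.
    return tuple(GAIN * error * (c / dist) for c in tag_pos)
```

For example, with the tag 1 m straight ahead and a 0.5 m stand-off, the command points forward along the camera axis; once the error reaches zero, the command vanishes and the camera holds its distance.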
The simulation computes the inverse kinematics of the manipulator (the angle required at each joint to reach the desired end-tip pose and posture). The computed angles are sent to the real robot in real time.
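To give a flavour of what the simulator is solving, here is a minimal analytic inverse-kinematics example for a 2-link planar arm. The real LWA3 is a redundant arm whose IK the simulator handles internally; the link lengths and function names below are assumptions made for illustration:

```python
import math

# Illustrative 2-link planar IK (the actual LWA3 is a multi-DOF arm whose
# IK the simulator computes; link lengths here are assumed values).
L1, L2 = 0.4, 0.3  # link lengths in metres (assumed)

def ik_2link(x, y):
    """Return joint angles (q1, q2) in radians placing the end tip at (x, y).

    Uses the elbow-down closed-form solution; raises ValueError if the
    target lies outside the arm's reachable workspace.
    """
    r2 = x * x + y * y
    cos_q2 = (r2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    if not -1.0 <= cos_q2 <= 1.0:
        raise ValueError("target out of reach")
    q2 = math.acos(cos_q2)  # elbow joint angle
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                       L1 + L2 * math.cos(q2))
    return q1, q2

def fk_2link(q1, q2):
    """Forward kinematics, handy for verifying an IK solution."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y
```

In a real-time setup like the one described, angles computed this way would be streamed to the robot's joint controllers at each control cycle.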