Point To Go Navigation of an assistive robot

Eric
Posts: 186
Joined: 11 Feb 2013, 16:39

Point To Go Navigation of an assistive robot

Post by Eric »

http://youtu.be/C_WsIYFSHcE
The operator interacts with a robot (simulated in this case), navigating through a Graphical User Interface (GUI) using his facial muscle activity (as seen in http://youtu.be/pvUpKrXlnA4 ). A button is clicked by first selecting the line containing the target button, then validating the button within that line. This two-stage selection reduces the number of interactions required to reach the target button when only a few (2 to 4) channels are available to interact with the GUI.
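
For illustration, here is a minimal, purely hypothetical Python sketch of such a row-then-button selection, assuming just two input channels (NEXT and SELECT, e.g. driven by facial-muscle activations); the button names and layout are made up for the example:

```python
# Hypothetical sketch of the two-stage (row, then button) selection scheme.
# Two assumed input channels: NEXT moves the highlight, SELECT validates it.

GRID = [
    ["turn_left", "front", "back", "turn_right", "stop"],  # first line
    ["dest_1", "dest_2", "dest_3"],                        # further lines...
]

def select(events):
    """events: iterable of 'NEXT' / 'SELECT' tokens from the operator."""
    stage, row, col = "row", 0, 0
    for ev in events:
        if stage == "row":
            if ev == "NEXT":
                row = (row + 1) % len(GRID)
            else:                 # SELECT: lock the row, start scanning buttons
                stage = "button"
        else:
            if ev == "NEXT":
                col = (col + 1) % len(GRID[row])
            else:                 # SELECT: validate the highlighted button
                return GRID[row][col]
    return None

print(select(["NEXT", "SELECT", "NEXT", "NEXT", "SELECT"]))  # -> 'dest_3'
```

With this scheme, any button in an R-line grid is reachable in at most R + max-line-length interactions, instead of needing one channel per button.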

Various interaction menus are available through the GUI, offering several types of robot control, from fully manual to shared control to fully autonomous. In this video the operator selects the Point to Go control mode.

In the POINT TO GO control sub-menu, the operator uses an augmented-reality reconstruction of the robot's surroundings overlaid on the video feed of the scene in front of the robot (in this case, the virtual camera of the simulated robot seen in the global view, but it could be a real robot with a real video feed, as seen in http://youtu.be/m4Jh8bBCAGc and http://youtu.be/TgdHDtS0N34). The operator selects a target (big blue square, i.e. the validation button) from a set of predefined destinations (small blue squares) arranged on a polar grid referenced to the robot, with ranges of (1.5 m, 2.5 m, 5 m) and bearings of (−18°, −9°, 0°, +9°, +18°).
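
A small Python sketch of such a destination grid, assuming the bearings are symmetric about the robot's heading and expressing the candidate points in the robot's own frame:

```python
import math

# Sketch of the predefined destination grid: a polar grid fixed to the
# robot's frame. Ranges and bearings taken from the description above;
# the bearing signs are an assumption.

RANGES_M = [1.5, 2.5, 5.0]
BEARINGS_DEG = [-18, -9, 0, 9, 18]

def destination_grid():
    """Return the candidate destinations as (x, y) points in the robot
    frame, x pointing forward, y pointing left."""
    points = []
    for r in RANGES_M:
        for b in BEARINGS_DEG:
            a = math.radians(b)
            points.append((r * math.cos(a), r * math.sin(a)))
    return points

for x, y in destination_grid():
    print(f"x = {x:5.2f} m, y = {y:5.2f} m")
```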

Obstacles, detected from a 2-D map built in real time from the laser range finder, are represented as translucent walls that hide the unreachable destinations (i.e. those behind an obstacle wall or too close to it). The red line marks the frontier between the translucent walls and the floor.

Upon validation of a destination button, a metrical path-planning algorithm computes a path from the robot's current position to the selected point on the ground. The generated metrical path is a set of waypoints leading from the robot's current location toward the target; it is computed from the 3D model of the environment with a Rapidly-exploring Random Tree (RRT) search algorithm, as sketched below. Once the metrical path is acquired, the robot is controlled to follow it. If an unexpected obstacle is detected on the way, the robot avoids it using a Braitenberg obstacle-avoidance algorithm and asks for a re-plan of its path.

The default orientation at the goal is defined by the vector from the robot's current position to the target point. This orientation can be modified (in 20° increments or decrements, even before reaching the goal) by pressing the turn buttons on the first line. If a stop button is pressed, the target's pose is set to overlap the robot's current pose; this brings the path-following algorithm to an end, hence stopping the robot. The front and back buttons of the first line help adjust the robot's position by moving it forward or backward in 0.3 m increments.
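
A minimal 2-D sketch of such an RRT planner, assuming circular obstacles and a fixed growth step (the obstacle positions and bounds are made up; the real planner works on the 3D model of the environment):

```python
import math, random

# Minimal 2-D RRT sketch in the spirit described above (not the authors'
# implementation). The planner grows a tree from the start pose in steps
# of fixed length until it gets within goal_tol of the target, then walks
# back up the tree to return the list of waypoints.

OBSTACLES = [(2.0, 0.5, 0.4), (3.5, -0.5, 0.5)]   # (x, y, radius), made up
STEP = 0.3                                        # growth step, in metres

def collision_free(x, y):
    return all(math.hypot(x - ox, y - oy) > r for ox, oy, r in OBSTACLES)

def rrt(start, goal, iters=5000, goal_tol=0.3):
    nodes, parent = [start], {0: None}
    for _ in range(iters):
        # Bias 10% of the samples toward the goal to speed up convergence.
        sx, sy = goal if random.random() < 0.1 else \
                 (random.uniform(0.0, 6.0), random.uniform(-3.0, 3.0))
        # Extend the nearest tree node one STEP toward the sample.
        i = min(range(len(nodes)),
                key=lambda k: math.hypot(nodes[k][0] - sx, nodes[k][1] - sy))
        nx, ny = nodes[i]
        d = math.hypot(sx - nx, sy - ny) or 1e-9
        px, py = nx + STEP * (sx - nx) / d, ny + STEP * (sy - ny) / d
        if not collision_free(px, py):
            continue
        nodes.append((px, py))
        parent[len(nodes) - 1] = i
        if math.hypot(px - goal[0], py - goal[1]) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:           # walk back to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]              # waypoints from start to goal
    return None                            # no path within the budget

print(rrt((0.0, 0.0), (5.0, 0.0)))
```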

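The Braitenberg avoidance reflex mentioned above can be sketched in the same spirit (a generic differential-drive version, not the authors' implementation; the gains and the reduction of the laser scan to two proximity readings are assumptions):

```python
# Braitenberg-style avoidance sketch: the laser scan is assumed reduced
# to a left and a right proximity reading in [0, 1] (1 = obstacle close).
# An obstacle on one side speeds up the wheel on that side, steering the
# robot away from it.

BASE_SPEED = 0.4        # m/s, assumed cruise speed while following the path
GAIN = 0.6              # assumed steering gain
REPLAN_THRESHOLD = 0.7  # assumed proximity above which a re-plan is asked

def avoidance_step(prox_left, prox_right):
    """Return (v_left, v_right, replan_needed) wheel speeds and flag."""
    v_left = BASE_SPEED + GAIN * (prox_left - prox_right)
    v_right = BASE_SPEED + GAIN * (prox_right - prox_left)
    replan = max(prox_left, prox_right) > REPLAN_THRESHOLD
    return v_left, v_right, replan

print(avoidance_step(0.8, 0.1))  # obstacle on the left -> veer right, re-plan
```

Keeping the reflex purely reactive like this lets it run at sensor rate, independently of the slower planner that recomputes the path.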