Visual feedback in motion

Smagt, P. van der and Groen, F. (1997) Visual feedback in motion. [Book Chapter]

Full text available as: Postscript
In this chapter we introduce a method for model-free monocular visual guidance of a robot arm. The robot arm, which carries a single camera in its end effector, must be positioned above a stationary target. We show that a trajectory can be planned in visual space using components of the optic flow, and that this trajectory can be translated into joint torques by a self-learning neural network. No model of the robot, camera, or environment is used, and the method reaches high grasping accuracy after only a few trials.
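The abstract mentions planning in visual space from components of the optic flow. As a hedged illustration only (not the chapter's actual algorithm), one such model-free component is the flow-field divergence: for a camera approaching a fronto-parallel surface, the radial flow u = x/tau, v = y/tau has divergence du/dx + dv/dy = 2/tau, which yields the time-to-contact tau without any model of the camera or scene. A minimal sketch, assuming a synthetic radial flow field:

```python
import numpy as np

def time_to_contact(u, v, dx=1.0, dy=1.0):
    """Estimate time-to-contact from the mean divergence of a flow field.

    For pure approach toward a planar surface, divergence = 2 / tau,
    so tau = 2 / divergence. A hypothetical helper for illustration.
    """
    du_dx = np.gradient(u, dx, axis=1)  # partial derivative of u along x
    dv_dy = np.gradient(v, dy, axis=0)  # partial derivative of v along y
    divergence = np.mean(du_dx + dv_dy)
    return 2.0 / divergence

# Synthetic radial flow for an approach with tau = 5 time units.
tau_true = 5.0
ys, xs = np.mgrid[-10:11, -10:11].astype(float)
u = xs / tau_true
v = ys / tau_true

tau_est = time_to_contact(u, v)  # recovers tau_true for this linear field
```

Because the synthetic flow is linear in image coordinates, the finite-difference gradient is exact and the estimate matches the true time-to-contact; on real flow fields the divergence would be noisy and need spatial averaging.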

Item Type: Book Chapter
Subjects: Biology > Animal Cognition
          Computer Science > Machine Vision
          Computer Science > Neural Nets
          Computer Science > Robotics
ID Code: 493
Deposited By: van der Smagt, Patrick
Deposited On: 03 Jul 1998
Last Modified: 11 Mar 2011 08:54
