creators_name: Smagt, P. van der
creators_name: Groen, F.
editors_name: Omidvar, O.
editors_name: Smagt, P. van der
type: bookchapter
datestamp: 1998-07-03
lastmod: 2011-03-11 08:54:00
metadata_visibility: show
title: Visual feedback in motion
ispublished: pub
subjects: bio-ani-cog
subjects: comp-sci-mach-vis
subjects: comp-sci-neural-nets
subjects: comp-sci-robot
full_text_status: public
abstract: In this chapter we introduce a method for model-free monocular visual guidance of a robot arm. The robot arm, with a single camera in its end effector, should be positioned above a stationary target. It is shown that a trajectory can be planned in visual space by using components of the optic flow, and this trajectory can be translated to joint torques by a self-learning neural network. No model of the robot, camera, or environment is used. The method reaches a high grasping accuracy after only a few trials.
date: 1997
date_type: published
publication: Neural Systems for Robotics
publisher: Academic Press, Boston, Massachusetts
pagerange: 37-73
refereed: FALSE
citation: Smagt, P. van der and Groen, F. (1997) Visual feedback in motion. [Book Chapter]
document_url: http://cogprints.org/493/2/SmaGro97.ps