Smagt, P. van der and Groen, F. (1997) Visual feedback in motion. [Book Chapter]
Full text available as: Postscript (1246 KB)
Abstract
In this chapter we introduce a method for model-free monocular visual guidance of a robot arm. The robot arm, with a single camera in its end effector, is to be positioned above a stationary target. It is shown that a trajectory can be planned in visual space by using components of the optic flow, and this trajectory can be translated to joint torques by a self-learning neural network. No model of the robot, camera, or environment is used. The method reaches a high grasping accuracy after only a few trials.
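The chapter's method itself is not reproduced here, but one optic-flow component commonly used for model-free approach control is the divergence of the flow field, from which time-to-contact can be estimated without knowing distance or speed. The sketch below is a minimal illustration of that idea, assuming approach along the optical axis toward a planar target; the function names are illustrative and not taken from the chapter.

```python
import numpy as np

def flow_divergence(v, Z):
    """Divergence of the optic flow seen by a camera approaching a planar,
    fronto-parallel target at speed v (m/s) from distance Z (m).
    For radial expansion u = x*v/Z, w = y*v/Z, div = du/dx + dw/dy = 2*v/Z."""
    return 2.0 * v / Z

def time_to_contact(divergence):
    """Estimate time-to-contact tau from flow divergence alone:
    tau = Z/v = 2 / divergence. Neither Z nor v is needed individually,
    which is what makes the quantity useful for model-free guidance."""
    return 2.0 / divergence

# Example: camera 0.5 m above the target, descending at 0.1 m/s.
div = flow_divergence(0.1, 0.5)   # 0.4 s^-1
tau = time_to_contact(div)        # 5.0 s until contact at constant speed
```

A controller can regulate such a visually measured quantity directly (e.g. hold the divergence constant to produce a smooth deceleration toward the target), leaving the mapping from the visual command to joint torques to a learned component, as the chapter does with a self-learning neural network.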
| Field | Value |
|---|---|
| Item Type | Book Chapter |
| Subjects | Biology > Animal Cognition; Computer Science > Machine Vision; Computer Science > Neural Nets; Computer Science > Robotics |
| ID Code | 493 |
| Deposited By | van der Smagt, Patrick |
| Deposited On | 03 Jul 1998 |
| Last Modified | 11 Mar 2011 08:54 |