Learning visuomotor transformations for gaze-control and grasping

Hoffmann H, Schenck W, Möller R (2005)
Biological Cybernetics 93(2): 119-130.

Journal Article | Published | English


Abstract
To reach for and grasp an object, visual information about the object must be transformed into motor or postural commands for the arm and hand. In this paper, we present a robot model for visually guided reaching and grasping. The model mimics two alternative processing pathways for grasping, which are also likely to coexist in the human brain. The first pathway directly uses the retinal activation to encode the target position. In the second pathway, a saccade controller makes the eyes (cameras) focus on the target, and the gaze direction is used instead as positional input. For both pathways, an arm controller transforms information on the target's position and orientation into an arm posture suitable for grasping. For the training of the saccade controller, we suggest a novel staged learning method which does not require a teacher that provides the necessary motor commands. The arm controller uses unsupervised learning: it is based on a density model of the sensor and the motor data. Using this density, a mapping is achieved by completing a partially given sensorimotor pattern. The controller can cope with the ambiguity of having a set of redundant arm postures for a given target. The combined model of saccade and arm controller was able to fixate and grasp an elongated object at an arbitrary position and orientation on a table in 94% of trials.
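The completion idea described in the abstract can be illustrated with a deliberately simplified sketch: fit a density model to joint sensorimotor vectors, then, given only the sensor part, read off the motor part from the conditional distribution. The paper's density model is richer (it handles the multi-modal ambiguity of redundant arm postures); the single joint Gaussian, toy data, and function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sensorimotor data: sensor part s (e.g. target position) and motor
# part m (e.g. posture), related linearly plus a little noise.
s = rng.uniform(-1, 1, size=(500, 2))
W = np.array([[0.8, -0.3], [0.2, 0.9]])
m = s @ W + 0.05 * rng.normal(size=(500, 2))
x = np.hstack([s, m])                      # joint sensorimotor patterns

# Density model: mean and covariance of the joint data.
mu = x.mean(axis=0)
cov = np.cov(x, rowvar=False)

def complete(sensor, ds=2):
    """Fill in the motor part of a partially given pattern:
    the conditional mean of m given s under the joint Gaussian."""
    mu_s, mu_m = mu[:ds], mu[ds:]
    S_ss = cov[:ds, :ds]                   # sensor-sensor covariance
    S_ms = cov[ds:, :ds]                   # motor-sensor covariance
    return mu_m + S_ms @ np.linalg.solve(S_ss, sensor - mu_s)

target = np.array([0.5, -0.2])
posture = complete(target)                 # predicted motor command
```

A unimodal Gaussian always returns one posture per target; capturing the set of redundant postures mentioned in the abstract would require a multi-modal density (e.g. a mixture model) completed in the same conditional fashion.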

Cite this

Hoffmann H, Schenck W, Möller R. Learning visuomotor transformations for gaze-control and grasping. Biological Cybernetics. 2005;93(2):119-130.

3 Citations in Europe PMC

Data provided by Europe PubMed Central.

Computational Models for Neuromuscular Function.
Valero-Cuevas FJ, Hoffmann H, Kurse MU, Kutch JJ, Theodorou EA., IEEE Rev Biomed Eng 2, 2009
PMID: 21687779
Perception through visuomotor anticipation in a mobile robot.
Hoffmann H., Neural Netw 20(1), 2007
PMID: 17010571



Sources

PMID: 16028074
PubMed | Europe PMC
