From Bottom-Up Visual Attention to Robot Action Learning

Nagai Y (2009)
In: The 8th IEEE International Conference on Development and Learning. Institute of Electrical and Electronics Engineers (Ed); Piscataway, NJ: IEEE.

Conference Paper | Published | English
 
Download
No files have been uploaded; publication record only.
Author
Nagai, Yukie
Corporate Editor
Institute of Electrical and Electronics Engineers
Abstract / Notes
This research addresses the challenge of developing an action learning model that employs bottom-up visual attention. Although bottom-up attention enables robots to autonomously explore the environment, learn to recognize objects, and interact with humans, the instability of their attention and the poor quality of the information detected at the attended location have hindered robots from processing dynamic movements. In order to learn actions, robots have to attend stably to the relevant movement, ignoring noise while remaining sensitive to new important movements. To meet these contradictory requirements, I introduce mechanisms for retinal filtering and stochastic attention selection inspired by human vision. The former reduces the complexity of the peripheral vision and thus enables robots to focus more on the currently attended location. The latter allows robots to flexibly shift their attention to a new prominent location, which should be relevant to the demonstrated action. The signals detected at the attended location are then enriched based on spatial and temporal continuity so that robots can learn to recognize objects, movements, and their associations. Experimental results show that the proposed system can extract key actions from human action demonstrations.
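The record itself contains no code. As a rough illustration of the two mechanisms the abstract names, the following minimal Python sketch computes a crude contrast-based saliency map as a stand-in for a full bottom-up attention model, attenuates peripheral saliency around the current fixation (retinal filtering), and samples the next fixation from a softmax over the filtered map (stochastic attention selection). All function names, parameters, and the synthetic input frames are illustrative assumptions, not the paper's implementation.

import numpy as np

def saliency_map(image):
    """Crude bottom-up saliency: absolute center-surround intensity
    contrast (pixel value minus its 5x5 neighbourhood mean)."""
    h, w = image.shape
    pad = np.pad(image, 2, mode="edge")
    surround = sum(
        pad[dy:dy + h, dx:dx + w] for dy in range(5) for dx in range(5)
    ) / 25.0
    return np.abs(image - surround)

def retinal_filter(saliency, fixation, sigma=20.0):
    """Attenuate peripheral saliency with a Gaussian fall-off around the
    current fixation, mimicking the reduced resolution of peripheral
    vision so attention stays near the currently attended location."""
    h, w = saliency.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist2 = (ys - fixation[0]) ** 2 + (xs - fixation[1]) ** 2
    return saliency * np.exp(-dist2 / (2.0 * sigma ** 2))

def stochastic_attention(saliency, temperature=0.05, rng=None):
    """Sample the next fixation from a softmax over the saliency map:
    attention usually stays on strong peaks but can still jump to a
    newly prominent location."""
    rng = np.random.default_rng() if rng is None else rng
    logits = saliency.ravel() / max(temperature, 1e-8)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    idx = rng.choice(probs.size, p=probs)
    return np.unravel_index(idx, saliency.shape)

# Usage: track attention over a short sequence of synthetic grayscale frames.
rng = np.random.default_rng(0)
frames = rng.random((10, 64, 64))   # placeholder for camera input
fixation = (32, 32)                 # start at the image centre
for frame in frames:
    sal = retinal_filter(saliency_map(frame), fixation)
    fixation = stochastic_attention(sal, rng=rng)
    print("fixation:", fixation)

Balancing the two parameters mirrors the trade-off the abstract describes: a narrower sigma keeps attention stable on the currently attended movement, while a higher temperature makes attention more willing to shift to a newly prominent location.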
Publication Year
2009
Conference Proceedings Title
The 8th IEEE International Conference on Development and Learning
Conference Location
Shanghai, China
Conference Date
2009-06-05
ISBN
9781424441174
Page URI
https://pub.uni-bielefeld.de/record/1890428

Cite

Nagai, Y. (2009). From Bottom-Up Visual Attention to Robot Action Learning. In Institute of Electrical and Electronics Engineers (Ed.), The 8th IEEE International Conference on Development and Learning. Piscataway, NJ: IEEE. https://doi.org/10.1109/devlrn.2009.5175517