Towards an integrated model of speech and gesture production for multi-modal robot behavior

Salem M, Kopp S, Wachsmuth I, Joublin F (2010)
In: Proceedings of the 2010 IEEE International Symposium on Robot and Human Interactive Communication. 649-654.

Conference Paper | Published | English


Abstract
The generation of communicative, speech-accompanying robot gesture is still largely unexplored. We present an approach to enable the humanoid robot ASIMO to flexibly produce speech and co-verbal gestures at run-time, while not being limited to a pre-defined repertoire of motor actions. Since much research has already been dedicated to this challenge within the domain of virtual conversational agents, we build upon the experience gained from the development of a speech and gesture production model used for the virtual human Max. We propose a robot control architecture building upon the Articulated Communicator Engine (ACE) that was developed to allow virtual agents to flexibly realize planned multi-modal behavior representations on the spot. Our approach tightly couples ACE with ASIMO's perceptuo-motor system, combining conceptual representation and planning with motor control primitives for speech and arm movements of a physical robot body. First results of both gesture production and speech synthesis using ACE and the MARY text-to-speech system are presented and discussed.
Publishing Year
2010
Conference
IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN 2010)
Location
Viareggio, Italy
Cite this

Salem M, Kopp S, Wachsmuth I, Joublin F. Towards an integrated model of speech and gesture production for multi-modal robot behavior. In: Proceedings of the 2010 IEEE International Symposium on Robot and Human Interactive Communication. 2010: 649-654.