Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints

Bergmann K, Kahl S, Kopp S (2013)
In: Intelligent Virtual Agents. Aylett R, Krenn B, Pelachaud C, Shimodaira H (Eds); Lecture Notes in Artificial Intelligence. Berlin/Heidelberg: Springer: 203-216.

Conference Paper | Published | English
Editor
Aylett, R.; Krenn, B.; Pelachaud, C.; Shimodaira, H.
Abstract
This paper addresses the semantic coordination of speech and gesture, a major prerequisite when endowing virtual agents with convincing multimodal behavior. Previous research has focused on building rule- or data-based models specific to a particular language, culture, or individual speaker, but without considering the underlying cognitive processes. We present a flexible cognitive model in which both linguistic and cognitive constraints are considered in order to simulate natural semantic coordination across speech and gesture. An implementation of this model is presented, and first simulation results, compatible with empirical data from the literature, are reported.
Publishing Year
2013
Conference
13th International Conference on Intelligent Virtual Agents
Location
Edinburgh, UK
Conference Date
2013-08-29 – 2013-08-31
PUB-ID

Cite this

Bergmann K, Kahl S, Kopp S. Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints. In: Aylett R, Krenn B, Pelachaud C, Shimodaira H, eds. Intelligent Virtual Agents. Lecture Notes in Artificial Intelligence. Berlin/Heidelberg: Springer; 2013: 203-216.
Bergmann, K., Kahl, S., & Kopp, S. (2013). Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints. In R. Aylett, B. Krenn, C. Pelachaud, & H. Shimodaira (Eds.), Lecture Notes in Artificial Intelligence. Intelligent Virtual Agents (pp. 203-216). Berlin/Heidelberg: Springer.
Bergmann, K., Kahl, S., and Kopp, S. (2013). "Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints," in Intelligent Virtual Agents, Lecture Notes in Artificial Intelligence, eds. R. Aylett, B. Krenn, C. Pelachaud, and H. Shimodaira (Berlin/Heidelberg: Springer), 203-216.
Bergmann, K., Kahl, S., & Kopp, S., 2013. Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints. In R. Aylett, et al., eds. Intelligent Virtual Agents. Lecture Notes in Artificial Intelligence. Berlin/Heidelberg: Springer, pp. 203-216.
K. Bergmann, S. Kahl, and S. Kopp, “Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints”, Intelligent Virtual Agents, R. Aylett, et al., eds., Lecture Notes in Artificial Intelligence, Berlin/Heidelberg: Springer, 2013, pp. 203-216.
Bergmann, K., Kahl, S., Kopp, S.: Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints. In: Aylett, R., Krenn, B., Pelachaud, C., and Shimodaira, H. (eds.) Intelligent Virtual Agents. Lecture Notes in Artificial Intelligence. pp. 203-216. Springer, Berlin/Heidelberg (2013).
Bergmann, Kirsten, Kahl, Sebastian, and Kopp, Stefan. “Modeling the semantic coordination of speech and gesture under cognitive and linguistic constraints”. Intelligent Virtual Agents. Ed. R. Aylett, B. Krenn, C. Pelachaud, and H. Shimodaira. Berlin/Heidelberg: Springer, 2013. Lecture Notes in Artificial Intelligence. 203-216.
Main File(s)
Access Level
Open Access
Last Uploaded
2016-03-09T09:54:39Z

External material:
Supplementary Material
Description
Production demo video: a demo video of our cognitive model of speech and gesture production.
