Feedback Interpretation based on Facial Expressions in Human–Robot Interaction

Lang C, Hanheide M, Lohse M, Wersing H, Sagerer G (2009)
In: International Symposium on Robot and Human Interactive Communication (RO-MAN'09). Toyama, Japan: IEEE: 189-194.

Conference Paper | Published | English
Abstract
In everyday conversation, people communicate not only through speech but also by means of nonverbal cues. Facial expressions are one important cue, as they can provide useful information about the conversation, for instance, whether the interlocutor seems to understand or appears to be puzzled. Similarly, in human-robot interaction facial expressions give feedback about the interaction situation. We present a Wizard of Oz user study in an object-teaching scenario where subjects showed several objects to a robot and taught it the objects' names. Afterward, the robot was expected to name the objects correctly. In a first evaluation, we let other people watch short video sequences from this study. They judged, by looking at the human's face, whether the robot's answer was correct (unproblematic situation) or incorrect (problematic situation). We conducted the experiments under different conditions, varying the amount of temporal and visual context information, and compared the results with related experiments described in the literature.
Publishing Year
2009
Conference
18th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN’09)
Location
Toyama, Japan
Cite this

Lang C, Hanheide M, Lohse M, Wersing H, Sagerer G. Feedback Interpretation based on Facial Expressions in Human–Robot Interaction. In: International Symposium on Robot and Human Interactive Communication (RO-MAN'09). Toyama, Japan: IEEE; 2009: 189-194.
Main File(s)
Access Level
Campus/VPN UniBi Only
Last Uploaded
2014-06-17 07:51:00
