Facial Communicative Signals: valence recognition in task-oriented human-robot interaction
Lang C (2012)
Bielefeld: Bielefeld University.
Bielefeld E-Dissertation | English
Author
Examiner / Supervisor
Institution
Abstract / Remark
In this dissertation, we investigate facial communicative signals (FCSs) in terms of valence recognition in task-oriented human-robot interaction. Facial communicative signals mainly comprise head gestures, eye gaze, and facial expressions. We review important psychological findings about the human display and perception of FCSs. Based on this discussion, several conclusions are drawn that motivate the presented work.
We investigate FCS recognition in terms of positive or negative valence in an object-teaching scenario in which human subjects teach objects to a robot. The robot's correct or wrong answer when queried for the object name defines the ground truth for the FCSs the humans subsequently displayed in reaction to this answer. Thus, the facial display a human showed after the robot classified an object correctly is treated as an example of the positive or success class. Similarly, the FCSs shown after a wrong answer constitute an example of the negative or failure class. We evaluated to what degree humans can infer whether the robot's answer was correct or not from these facial displays alone.
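As a concrete illustration of this labeling scheme, the following is a minimal Python sketch of how the ground-truth valence labels could be derived from the correctness of the robot's answers; the data layout and function name are hypothetical and not taken from the dissertation.

# Minimal sketch of the ground-truth labeling (hypothetical data layout:
# each recorded facial display is paired with a flag stating whether the
# robot's answer to the object-name query was correct).
def label_displays(displays):
    """Assign valence labels: +1 (success) if the robot answered correctly,
    -1 (failure) otherwise."""
    return [(frames, 1 if robot_answer_correct else -1)
            for frames, robot_answer_correct in displays]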
Furthermore, we present a simple static baseline approach for the automatic classification of these facial displays in terms of valence. It is based on feature extraction with active appearance models (AAMs) and classification with support vector machines (SVMs). The method does not consider temporal dynamics, but uses a simple majority vote over the per-frame classification results.
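For illustration, here is a minimal sketch of such a static baseline, assuming AAM parameter vectors have already been extracted for each frame and using scikit-learn's SVC; the names and data layout are hypothetical and not taken from the dissertation.

# Minimal sketch of the static baseline: frame-wise SVM on AAM features,
# followed by a majority vote over the frames of one facial display.
import numpy as np
from sklearn.svm import SVC

def train_static_baseline(train_frames, train_labels, C=1.0):
    """Train a frame-wise SVM on AAM feature vectors.

    train_frames: (n_frames, n_aam_params) array, frames pooled over sequences
    train_labels: (n_frames,) array with +1 (success) / -1 (failure)
    """
    clf = SVC(kernel="rbf", C=C)
    clf.fit(train_frames, train_labels)
    return clf

def classify_sequence(clf, sequence):
    """Classify one facial display by majority vote over per-frame decisions
    (ties are resolved toward the positive class in this sketch)."""
    frame_preds = clf.predict(sequence)  # one label per frame
    return 1 if np.sum(frame_preds == 1) >= len(frame_preds) / 2 else -1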
This simple static approach provided baseline results for a more sophisticated dynamic approach, which is based on the selection of discriminative reference subsequences as prototypes in a nearest-neighbor classification scheme. The temporal dynamics are taken into account by means of dynamic time warping (DTW), which is used to compare sequences of AAM feature vectors.
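The following minimal sketch illustrates the general idea of DTW-based nearest-neighbor classification against labeled reference subsequences; the prototype selection step is omitted, the DTW cost uses Euclidean frame distances, and all names are hypothetical rather than taken from the dissertation.

# Minimal sketch of the dynamic approach: 1-nearest-neighbor classification
# of an AAM feature sequence against pre-selected reference subsequences.
import numpy as np

def dtw_distance(seq_a, seq_b):
    """Dynamic time warping distance between two (length, n_features) sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_dynamic(query_seq, prototypes):
    """Return the valence label of the DTW-nearest reference subsequence.

    prototypes: list of (subsequence, label) pairs with label +1 / -1
    """
    distances = [(dtw_distance(query_seq, proto), label) for proto, label in prototypes]
    return min(distances, key=lambda t: t[0])[1]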
In the conducted evaluation, this dynamic FCS recognition approach outperformed the static baseline and achieved human-level classification accuracy in person-specific classification.
Year
2012
Page(s)
192
Page URI
https://pub.uni-bielefeld.de/record/2534412
Cite
Lang C. Facial Communicative Signals: valence recognition in task-oriented human-robot interaction. Bielefeld: Bielefeld University; 2012.
Lang, C. (2012). Facial Communicative Signals: valence recognition in task-oriented human-robot interaction. Bielefeld: Bielefeld University.
Lang, Christian. 2012. Facial Communicative Signals: valence recognition in task-oriented human-robot interaction. Bielefeld: Bielefeld University.
Lang, C., 2012. Facial Communicative Signals: valence recognition in task-oriented human-robot interaction, Bielefeld: Bielefeld University.
C. Lang, Facial Communicative Signals: valence recognition in task-oriented human-robot interaction, Bielefeld: Bielefeld University, 2012.
Lang, C.: Facial Communicative Signals: valence recognition in task-oriented human-robot interaction. Bielefeld University, Bielefeld (2012).
Lang, Christian. Facial Communicative Signals: valence recognition in task-oriented human-robot interaction. Bielefeld: Bielefeld University, 2012.
All files available under the following license(s):
Copyright Statement:
This object is protected by copyright and/or related rights. [...]
Full Text(s)
Access Level
Open Access
Last Uploaded
2019-09-25T06:41:38Z
MD5 Checksum
ea1271e137052fc3584aeedb12bc5d49