Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance

Mitev N, Renner P, Pfeiffer T, Staudte M (2018)
Cognitive Research: Principles and Implications 3(3): 51.

Journal Article | Published | English
 
Download
No files have been uploaded. Bibliographic record only!
Author(s)
Mitev, Nikolina; Renner, Patrick (UniBi); Pfeiffer, Thies (UniBi); Staudte, Maria
Abstract / Notes
Referential success is crucial for collaborative task-solving in shared environments. In face-to-face interactions, humans therefore exploit speech, gesture, and gaze to identify a specific object. We investigate whether and how the gaze behavior of a human interaction partner can be used by a gaze-aware assistance system to improve referential success. Specifically, our system describes objects in the real world to a human listener using on-the-fly speech generation. It continuously interprets listener gaze and implements alternative strategies to react to this implicit feedback. We used this system to investigate an optimal strategy for task performance: providing an unambiguous, longer instruction right from the beginning, or starting with a shorter, yet ambiguous instruction. Further, the system provides gaze-driven feedback, which could be either underspecified (“No, not that one!”) or contrastive (“Further left!”). As expected, our results show that ambiguous instructions followed by underspecified feedback are not beneficial for task performance, whereas contrastive feedback results in faster interactions. Interestingly, this approach even outperforms unambiguous instructions (manipulation between subjects). However, when the system alternates between underspecified and contrastive feedback to initially ambiguous descriptions in an interleaved manner (within subjects), task performance is similar for both approaches. This suggests that listeners engage more intensely with the system when they can expect it to be cooperative. This, rather than the actual informativity of the spoken feedback, may determine the efficiency of information uptake and performance.
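
For illustration, the gaze-driven decision loop described in the abstract might look like the following minimal Python sketch. Everything here is an assumption for illustration only: the SceneObject representation, the coordinates, the direction heuristic, and the feedback strings echo the abstract's examples, not the authors' published implementation.

# Hypothetical sketch (not the authors' code): the two gaze-driven feedback
# strategies described above, assuming a simplified 2D scene and a
# fixation-detection front end that reports which object is looked at.

from dataclasses import dataclass
from typing import Optional


@dataclass
class SceneObject:
    name: str
    x: float  # horizontal position in the scene (illustrative units)
    y: float  # vertical position in the scene (illustrative units)


def contrastive_feedback(fixated: SceneObject, target: SceneObject) -> str:
    """Name the dominant direction from the fixated object to the target."""
    dx, dy = target.x - fixated.x, target.y - fixated.y
    if abs(dx) >= abs(dy):
        return "Further right!" if dx > 0 else "Further left!"
    return "Further up!" if dy > 0 else "Further down!"


def feedback(fixated: SceneObject, target: SceneObject, strategy: str) -> Optional[str]:
    """Return spoken feedback for the current fixation, or None if on target."""
    if fixated.name == target.name:
        return None  # listener already inspects the intended referent
    if strategy == "underspecified":
        return "No, not that one!"  # rejects the fixated object, gives no direction
    return contrastive_feedback(fixated, target)  # spatially informative cue


# Example: the listener fixates the wrong object while the system
# refers to another one further to the right.
target = SceneObject("red mug", x=0.6, y=0.2)
fixated = SceneObject("blue mug", x=0.1, y=0.2)
print(feedback(fixated, target, "underspecified"))  # No, not that one!
print(feedback(fixated, target, "contrastive"))     # Further right!

The study's between-subjects contrast corresponds to fixing strategy per participant, while the within-subjects (interleaved) condition corresponds to varying it across trials for the same listener.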
Keywords
Human–computer interaction; Natural language generation; Listener gaze; Referential success; Multimodal systems
Year of Publication
2018
Journal Title
Cognitive Research: Principles and Implications
Volume
3
Issue
3
Article Number
51
ISSN
2365-7464
eISSN
2365-7464
Page URI
https://pub.uni-bielefeld.de/record/2932893

Cite

Mitev N, Renner P, Pfeiffer T, Staudte M. Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance. Cognitive Research: Principles and Implications. 2018;3(3):51.
Mitev, N., Renner, P., Pfeiffer, T., & Staudte, M. (2018). Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance. Cognitive Research: Principles and Implications, 3(3), 51. doi:10.1186/s41235-018-0148-x
Mitev, Nikolina, Renner, Patrick, Pfeiffer, Thies, and Staudte, Maria. 2018. “Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance”. Cognitive Research: Principles and Implications 3 (3): 51.
Mitev, N., Renner, P., Pfeiffer, T., and Staudte, M. (2018). Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance. Cognitive Research: Principles and Implications 3:51.
Mitev, N., et al., 2018. Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance. Cognitive Research: Principles and Implications, 3(3): 51.
N. Mitev, et al., “Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance”, Cognitive Research: Principles and Implications, vol. 3, no. 3, 2018, art. no. 51.
Mitev, N., Renner, P., Pfeiffer, T., Staudte, M.: Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance. Cognitive Research: Principles and Implications. 3, 51 (2018).
Mitev, Nikolina, Renner, Patrick, Pfeiffer, Thies, and Staudte, Maria. “Towards efficient human–machine collaboration. Effects of gaze-driven feedback and engagement on performance”. Cognitive Research: Principles and Implications 3.3 (2018): 51.

Link(s) to Full Text(s)
Access Level
OA Open Access

Sources

PMID: 30594976
PubMed | Europe PMC
