Ghost-in-the-Machine reveals human social signals for human–robot interaction

Loth S, Jettka K, Giuliani M, de Ruiter J (2015)
Frontiers in Psychology 6: 1641.

Journal Article | Original Article | Published | English
Abstract
We used a new method called “Ghost-in-the-Machine” (GiM) to investigate social interactions with a robotic bartender taking orders for drinks and serving them. Using the GiM paradigm allowed us to identify how human participants recognize the intentions of customers on the basis of the output of the robotic recognizers. Specifically, we measured which recognizer modalities (e.g., speech, the distance to the bar) were relevant at different stages of the interaction. This provided insights into human social behavior necessary for the development of socially competent robots. When initiating the drink-order interaction, the most important recognizers were those based on computer vision. When drink orders were being placed, however, the most important information source was the speech recognition. Interestingly, the participants used only a subset of the available information, focussing only on a few relevant recognizers while ignoring others. This reduced the risk of acting on erroneous sensor data and enabled them to complete service interactions more swiftly than a robot using all available sensor data. We also investigated socially appropriate response strategies. In their responses, the participants preferred to use the same modality as the customer’s requests, e.g., they tended to respond verbally to verbal requests. Also, they added redundancy to their responses, for instance by using echo questions. We argue that incorporating the social strategies discovered with the GiM paradigm in multimodal grammars of human–robot interactions improves the robustness and the ease-of-use of these interactions, and therefore provides a smoother user experience.
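The stage-dependent use of recognizers and the modality-matching response strategy summarized in the abstract can be pictured with a short, hypothetical sketch. This is not code from the study; the recognizer fields, thresholds, and stage labels below are illustrative assumptions only.

# Hypothetical sketch: attend to one recognizer modality per interaction stage,
# then reply in the customer's modality with added redundancy (echo question).
from dataclasses import dataclass
from typing import Optional


@dataclass
class RecognizerOutput:
    distance_to_bar: float                     # metres, from a vision-based tracker (assumed field)
    facing_bar: bool                           # body/head orientation, from computer vision (assumed field)
    speech_transcript: Optional[str] = None    # output of the speech recognizer, if any


def infer_intention(stage: str, sensors: RecognizerOutput) -> str:
    """Attend only to the modality that matters at the current stage."""
    if stage == "initiation":
        # Vision dominates: a customer close to the bar and facing it is
        # bidding for service; speech output is ignored at this stage.
        if sensors.distance_to_bar < 0.8 and sensors.facing_bar:
            return "wants_service"
        return "not_engaged"
    if stage == "ordering":
        # Speech dominates: vision output is ignored at this stage.
        if sensors.speech_transcript:
            return "order:" + sensors.speech_transcript
        return "waiting_for_order"
    return "unknown"


def respond(intention: str) -> str:
    """Reply in the customer's modality and add redundancy."""
    if intention.startswith("order:"):
        drink = intention.split(":", 1)[1]
        return f'say("One {drink}, coming up?")'             # verbal echo question
    if intention == "wants_service":
        return 'gaze_and_say("Hello, what can I get you?")'  # multimodal opening
    return "wait()"

For example, infer_intention("ordering", RecognizerOutput(0.5, True, "pint of lager")) yields "order:pint of lager", and respond then produces a verbal echo question, mirroring the preference for same-modality, redundant responses reported in the abstract.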
Publishing Year: 2015
Financial disclosure: Article Processing Charge funded by the Deutsche Forschungsgemeinschaft and the Open Access Publication Fund of Bielefeld University.

Cite this

Loth, S., Jettka, K., Giuliani, M., & de Ruiter, J. (2015). Ghost-in-the-Machine reveals human social signals for human–robot interaction. Frontiers in Psychology, 6, 1641. doi:10.3389/fpsyg.2015.01641
Main File(s)
Access Level: Open Access
Last Uploaded: 2016-05-31T12:08:47Z




Sources

PMID: 26582998
PubMed | Europe PMC
