Automatic detection of service initiation signals used in bars

Loth S, Huth K, de Ruiter J (2013)
Frontiers in Psychology 4: 557.

Journal Article | Published | English
Abstract / Notes
Recognizing the intention of others is important in all social interactions, especially in the service domain. Enabling a bartending robot to serve customers is particularly challenging as the system has to recognize the social signals produced by customers and respond appropriately. Detecting whether a customer would like to order is essential for the service encounter to succeed. This detection is particularly challenging in a noisy environment with multiple customers. Thus, a bartending robot has to be able to distinguish between customers intending to order, chatting with friends or just passing by. In order to study which signals customers use to initiate a service interaction in a bar, we recorded real-life customer-staff interactions in several German bars. These recordings were used to generate initial hypotheses about the signals customers produce when bidding for the attention of bar staff. Two experiments using snapshots and short video sequences then tested the validity of these hypothesized candidate signals. The results revealed that bar staff responded to a set of two non-verbal signals: first, customers position themselves directly at the bar counter and, secondly, they look at a member of staff. Both signals were necessary and, when occurring together, sufficient. The participants also showed a strong agreement about when these cues occurred in the videos. Finally, a signal detection analysis revealed that ignoring a potential order is deemed worse than erroneously inviting customers to order. We conclude that (a) these two easily recognizable actions are sufficient for recognizing the intention of customers to initiate a service interaction, but other actions such as gestures and speech were not necessary, and (b) the use of reaction time experiments using natural materials is feasible and provides ecologically valid results.
human robot interaction; social signal processing; intention recognition; social robotics; social signaling; action recognition
Frontiers in Psychology

Loth S, Huth K, de Ruiter J. Automatic detection of service initiation signals used in bars. Frontiers in Psychology. 2013;4: 557.
Loth, S., Huth, K., & de Ruiter, J. (2013). Automatic detection of service initiation signals used in bars. Frontiers in Psychology, 4, 557. doi:10.3389/fpsyg.2013.00557
Loth, S., Huth, K., and de Ruiter, J. (2013). Automatic detection of service initiation signals used in bars. Frontiers in Psychology 4:557.
Loth, S., Huth, K., & de Ruiter, J., 2013. Automatic detection of service initiation signals used in bars. Frontiers in Psychology, 4: 557.
S. Loth, K. Huth, and J. de Ruiter, “Automatic detection of service initiation signals used in bars”, Frontiers in Psychology, vol. 4: 557, 2013.
Loth, S., Huth, K., de Ruiter, J.: Automatic detection of service initiation signals used in bars. Frontiers in Psychology. 4, 557 (2013).
Loth, Sebastian, Huth, Kerstin, and de Ruiter, Jan. “Automatic detection of service initiation signals used in bars”. Frontiers in Psychology 4 (2013): 557.
All files available under the following license(s):
Copyright Statement:
This object is protected by copyright and/or related rights. [...]
Access Level
OA Open Access
Last Uploaded
MD5 Checksum

5 Citations in Europe PMC

Data provided by Europe PubMed Central.

Confidence in uncertainty: Error cost and commitment in early speech hypotheses.
Loth S, Jettka K, Giuliani M, Kopp S, de Ruiter JP., PLoS One 13(8), 2018
PMID: 30067853
Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.
Giuliani M, Mirnig N, Stollnberger G, Stadler S, Buchner R, Tscheligi M., Front Psychol 6, 2015
PMID: 26217266
Ghost-in-the-Machine reveals human social signals for human-robot interaction.
Loth S, Jettka K, Giuliani M, de Ruiter JP., Front Psychol 6, 2015
PMID: 26582998

57 References

Data provided by Europe PubMed Central.

Mixed-effects modeling with crossed random effects for subjects and items
Baayen R., Davidson D., Bates D., 2008
Visual tracking of hands, faces and facial features of multiple persons
Baltzakis H., Pateraki M., Trahanias P., 2012
Fitting linear mixed models in R
Bates D., 2005

Bates D., Sarkar D., 2007
Models for multiparty engagement in open-world dialog
Bohus D., Horvitz E., 2009
Open-world dialog: challenges, directions, and prototype
Bohus D., Horvitz E., 2009
Learning to predict engagement with a spoken dialog system in open-world settings
Bohus D., Horvitz E., 2009
Dialog in the open world: platform and applications
Bohus D., Horvitz E., 2009
On the challenges and opportunities of physically situated dialog
Bohus D., Horvitz E., 2010
Multiparty turn taking in situated dialog: study, lessons, and directions
Bohus D., Horvitz E., 2011

Brysbaert M., 2007

Cohen J., 1969

Cramér H., 1946
Projecting the end of a speaker's turn: a cognitive cornerstone of conversation
De Ruiter J., Mitterer H., Enfield N., 2006
G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences.
Faul F, Erdfelder E, Lang AG, Buchner A., Behav Res Methods 39(2), 2007
PMID: 17695343
DMDX: a windows display program with millisecond accuracy.
Forster KI, Forster JC., Behav Res Methods Instrum Comput 35(1), 2003
PMID: 12723786
Two people walk into a bar
Foster M., Gaschler A., Giuliani M., Isard A., Pateraki M., Petrick R., 2012

Goffman E., 1963
Bilingualism affects picture naming but not picture classification.
Gollan TH, Montoya RI, Fennema-Notestine C, Morris SK., Mem Cognit 33(7), 2005
PMID: 16532855
Actions as space-time shapes.
Gorelick L, Blank M, Shechtman E, Irani M, Basri R., IEEE Trans Pattern Anal Mach Intell 29(12), 2007
PMID: 17934233

Hall E., 1969

Holroyd A., Ponsler B., Koakiettaveechai P., 2009
Generating connection events for human-robot collaboration
Holroyd A., Rich C., Sidner C., Ponsler B., 2011
Robot behavior toolkit
Huang C.-M., Mutlu B., 2012
Communicating with multiple users for embodied conversational agents in quiz game context
Huang H., Furukawa T., Ohashi H., Nishida T., Cerekovic A., Pandzic I., 2010
Grasping the intentions of others with one's own mirror neuron system.
Iacoboni M, Molnar-Szakacs I, Gallese V, Buccino G, Mazziotta JC, Rizzolatti G., PLoS Biol. 3(3), 2005
PMID: 15736981

Given that, should I respond? Contextual addressee estimation in multi-party human-robot interactions
Jayagopi D., Odobez J.-M., 2013
Representations for actions
Jeannerod M., 2006
Actions or hand-object interactions? Human inferior frontal cortex and action observation.
Johnson-Frey SH, Maloof FR, Newman-Norlund R, Farrer C, Inati S, Grafton ST., Neuron 39(6), 2003
PMID: 12971903
Predictive coding: an account of the mirror neuron system.
Kilner JM, Friston KJ, Frith CD., Cogn Process 8(3), 2007
PMID: 17429704
Engagement-based multi-party dialog with a humanoid robot
Klotz D., Wienke J., Peltason J., Wrede B., Wrede S., Khalidov V., 2011
Interactional biases in human thinking
Levinson S., 1995

MacKay D., 2003
A spatial model of engagement for a social robot
Michalowski M., Sabanovic S., Simmons R., 2006
The restaurant game: learning social behavior and language from thousands of players online
Orkin J., Roy D., 2007
Automatic learning and generation of social behaviour from collective human gameplay
Orkin J., Roy D., 2009
The perception of face gender: the role of stimulus structure in recognition and classification.
O'Toole AJ, Deffenbacher KA, Valentin D, McKee K, Huff D, Abdi H., Mem Cognit 26(1), 1998
PMID: 9519705
Direction of attention perception for conversation initiation in virtual environments
Peters C., 2005
A model of attention and interest using gaze behavior
Peters C., Pelachaud C., Bevacqua E., Mancini M., Poggi I., 2005
What would you like to drink? Recognising and planning with social states in a robot bartender domain
Petrick R., Foster M., 2012
A survey on vision-based human action recognition
Poppe R., 2010

Recognizing engagement in human-robot interaction
Rich C., Ponsler B., Holroyd A., Sidner C., 2010
Opening up closings
Schegloff E., Sacks H., 1973
A mathematical theory of communication
Shannon C., 1948
Prediction and entropy of printed English
Shannon C., 1951
Real-time human pose recognition in parts from single depth images
Shotton J., Sharp T., Kipman A., Fitzgibbon A., Finocchio M., Blake A., 2013
Engagement rules for human-robot collaborative interactions
Sidner C., Lee C., 2003
Explorations in engagement for humans and robots
Sidner C., Lee C., Kidd C., Lesh N., Rich C., 2005
The role of visual similarity in picture categorization.
Snodgrass JG, McCullough B., J Exp Psychol Learn Mem Cogn 12(1), 1986
PMID: 2949047

ELAN: a professional framework for multimodality research
Wittenburg P., Brugman H., Russel A., Klassmann A., Sloetjes H., 2006
Slow feature analysis for human action recognition.
Zhang Z, Tao D., IEEE Trans Pattern Anal Mach Intell 34(3), 2012
PMID: 21808089


PMID: 24009594
PubMed | Europe PMC
