Tell Me Why (and What)! Self-Explanations for Autonomous Social Robot Behavior
Stange S (2022)
Bielefeld: Universität Bielefeld.
Bielefeld E-Dissertation | English
Author
Sonja Stange
Reviewer / Supervisor
Abstract / Notes
Social robots’ capabilities are advancing, and so is their deployment in numerous domains, for instance as language tutors, workout partners, or social companions. As social robots become more able to choose their behaviors autonomously, the likelihood inevitably grows that these behaviors deviate from what a user expects and approves of. Behavior explanations are one useful tool for increasing transparency and enhancing trust and acceptance in human-robot interaction. Yet sufficient models of how a social robot can self-explain its autonomous behavior, so as to adequately inform users and prevent potential negative effects, are still missing.
This thesis investigated the effects of providing human-inspired self-explanations for robot behavior in social human-robot interaction with differing content (Paper A) and at different times (Paper B). Based on these findings, a dialogic model of a social robot’s self-explanations was developed, implemented as part of an interaction architecture, and evaluated in a user study (Paper C).
More concretely, a first study (A) investigated what effects a social robot’s self-explanations have on users’ perception of its behavior and how these effects differ as a function of explanation content. An explanation type model was developed based on humans’ behavior explanations and conceptually grounded in the robot’s behavior generation process. Results demonstrated that verbal self-explanations could increase the understandability and desirability of robot behaviors. Positive effects were higher for causally structured than for simpler explanations and varied across different types of behavior.
Evaluation of the timing of a robot’s self-explanations (B) surprisingly revealed negative effects of explaining undesirable behavior before, as compared to after, executing the behavior. These contextual differences highlight the importance of considering the socio-interactive context when deciding when to give what kind of explanation.
The gained insights were transferred to an interaction setting (C): a dialogue-based, socio-interactive model for behavior explanations was proposed, and requirements for explainable architectures for social robots were postulated. The explanation model was implemented as part of an explainable interaction architecture and tested in an acquaintance scenario, demonstrating both successful behavior and explanation generation.
Overall, the positive effects of an autonomous robot’s self-explanations were shown to vary as a function of explanandum desirability as well as explanation content and timing, and explanation generation based on the proposed socio-interactive framework enabled the robot to autonomously and coherently explain its own behavior.
Year
2022
Page(s)
126
Copyright / Licenses
Page URI
https://pub.uni-bielefeld.de/record/2967737
Cite
Stange, S. (2022). Tell Me Why (and What)! Self-Explanations for Autonomous Social Robot Behavior. Bielefeld: Universität Bielefeld. https://doi.org/10.4119/unibi/2967737
All files are available under the following license(s):
Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0)
Full text(s)
Name
sstange_thesis_XAR_2022.pdf
9.24 MB
Access Level
Open Access
Last Uploaded
2023-01-10T14:43:02Z
MD5 Checksum
a80bd6e420eb1db7ffc4e7efea3b367e
Material in PUB:
Part of this dissertation
Effects of a Social Robot's Self-Explanations on How Humans Understand and Evaluate Its Behavior
Stange S, Kopp S (2020)
In: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. New York, NY: ACM: 619–627.
Part of this dissertation
Self-Explaining Social Robots: An Explainable Behavior Generation Architecture for Human-Robot Interaction
Stange S, Hassan T, Schröder F, Konkol J, Kopp S (2022)
Frontiers in Artificial Intelligence 5: 866920.
Part of this dissertation
Explaining Before or After Acting? How the Timing of Self-Explanations Affects User Perception of Robot Behavior
Stange S, Kopp S (2021)
In: Social Robotics. 13th International Conference, ICSR 2021, Singapore, Singapore, November 10–13, 2021, Proceedings. Li H, Ge SS, Wu Y, Wykowska A, He H, Liu X, Li D, Perez-Osorio J (Eds); Lecture Notes in Computer Science, 13086. Cham: Springer: 142–153.