Increased morality through social communication or decision situation worsens the acceptance of robo-advisors
Arlinghaus CS, Straßmann C, Dix A (2024)
OSF Preprints.
Preprint | English
Download
No files have been uploaded. Publication record only.
Author(s)
Arlinghaus, Clarissa Sabrina (Bielefeld University);
Straßmann, Carolin;
Dix, Annika
Abstract / Remarks
This German study (N = 317) tests social communication (i.e., self-disclosure, content intimacy, relational continuity units, we-phrases) as a potential compensation strategy for algorithm aversion. To this end, we explore the acceptance of a robot as an advisor in non-moral, somewhat moral, and very moral decision situations and compare the influence of two verbal communication styles of the robot (functional vs. social). Subjects followed the robot's recommendation similarly often for both communication styles (functional vs. social), but more often in the non-moral decision situation than in the moral decision situations. Subjects perceived the robot as more human and more moral during social communication than during functional communication, but as similarly trustworthy, likable, and intelligent for both communication styles. In moral decision situations, subjects ascribed more anthropomorphism and morality but less trust, likability, and intelligence to the robot compared to the non-moral decision situation. The higher perceived morality under social communication unexpectedly led subjects to follow the robot's recommendation less often; no other mediation effects were found. From this we conclude that the verbal communication style alone has a rather small influence on the robot's acceptance as an advisor for moral decision-making and does not reduce algorithm aversion. Potential reasons for this (e.g., multimodality, no visual changes), as well as implications (e.g., avoidance of self-disclosure in human-robot interaction) and limitations (e.g., video interaction) of this study, are discussed.
Year of Publication
2024
Journal Title
OSF Preprints
Page URI
https://pub.uni-bielefeld.de/record/2993481
Cite
Arlinghaus CS, Straßmann C, Dix A. Increased morality through social communication or decision situation worsens the acceptance of robo-advisors. OSF Preprints. 2024.
Arlinghaus, C. S., Straßmann, C., & Dix, A. (2024). Increased morality through social communication or decision situation worsens the acceptance of robo-advisors. OSF Preprints. https://doi.org/10.31219/osf.io/bufjh