What should AI see? Using the public’s opinion to determine the perception of an AI
Chan RK-W, Dardashti R, Osinski M, Rottmann M, Brüggemann D, Rücker C, Schlicht P, Hüger F, Rummel N, Gottschalk H (2023)
AI and Ethics 3(4): 1381–1405.
Journal article
| Published | English
Author(s)
Chan, Robin Kien-Wei;
Dardashti, Radin;
Osinski, Meike;
Rottmann, Matthias;
Brüggemann, Dominik;
Rücker, Cilia;
Schlicht, Peter;
Hüger, Fabian;
Rummel, Nikol;
Gottschalk, Hanno
Abstract / Notes
Deep neural networks (DNNs) have made impressive progress in the interpretation of image data, so that it is conceivable and to some degree realistic to use them in safety-critical applications like automated driving. From an ethical standpoint, the AI algorithm should take into account the vulnerability of objects or subjects on the street, which ranges from “not at all”, e.g. for the road itself, to the “high vulnerability” of pedestrians. One way to take this into account is to define the cost of confusing one semantic category with another and to use cost-based decision rules for the interpretation of the probabilities that DNNs output. However, it is an open problem how to define the cost structure, who should be in charge of doing so, and thereby who defines what AI algorithms will actually “see”. As one possible answer, we follow a participatory approach and set up an online survey asking the public to define the cost structure. We present the survey design and the data acquired, along with an evaluation that also distinguishes between perspective (car passenger vs. external traffic participant) and gender. Using simulation-based F-tests, we find highly significant differences between the groups. These differences have consequences for the reliable detection of pedestrians at a safety-critical distance from the self-driving car. We discuss the ethical problems related to this approach and, from a psychological point of view, the problems emerging from human–machine interaction through the survey. Finally, we include comments from industry leaders in the field of AI safety on the applicability of survey-based elements in the design of AI functionalities in automated driving.
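The cost-based decision rule mentioned in the abstract can be made concrete with a minimal sketch. The Python snippet below is purely illustrative: the class names and the cost values are assumptions for demonstration and are not taken from the paper or its survey. It shows how a confusion-cost matrix turns DNN softmax probabilities into a minimum-expected-cost decision instead of a plain maximum-a-posteriori decision.

```python
import numpy as np

# Hypothetical example: three semantic classes with different vulnerability.
CLASSES = ["road", "car", "pedestrian"]

# Confusion-cost matrix C[true, predicted]: cost of predicting the column class
# when the row class is the true one. Values are illustrative only; in the paper
# the cost structure is elicited from the public via a survey.
C = np.array([
    [0.0, 1.0, 1.0],   # true: road
    [1.0, 0.0, 1.0],   # true: car
    [8.0, 8.0, 0.0],   # true: pedestrian (overlooking a pedestrian is penalized heavily)
])

def cost_based_decision(probs: np.ndarray, cost: np.ndarray) -> int:
    """Return the class index that minimizes the expected confusion cost.

    probs: softmax output of the DNN for one pixel/object, shape (K,).
    cost:  confusion-cost matrix, shape (K, K).
    """
    expected_cost = probs @ cost  # expected cost of each candidate decision
    return int(np.argmin(expected_cost))

# Softmax output where "road" is most likely but "pedestrian" is still plausible.
p = np.array([0.55, 0.15, 0.30])
print(CLASSES[int(np.argmax(p))])          # maximum-a-posteriori rule -> "road"
print(CLASSES[cost_based_decision(p, C)])  # cost-based rule -> "pedestrian"
```

With an asymmetric cost structure like this, the decision rule is deliberately shifted toward detecting vulnerable road users even when their class probability is not the largest.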
Publication year
2023
Journal title
AI and Ethics
Volume
3
Issue
4
Page(s)
1381–1405
ISSN
2730-5953
eISSN
2730-5961
Funding information
Open access publication costs were funded by Bielefeld University as part of the DEAL agreement.
Page URI
https://pub.uni-bielefeld.de/record/2968966
Cite
Chan RK-W, Dardashti R, Osinski M, et al. What should AI see? Using the public’s opinion to determine the perception of an AI. AI and Ethics. 2023;3(4):1381–1405.
Chan, R. K. - W., Dardashti, R., Osinski, M., Rottmann, M., Brüggemann, D., Rücker, C., Schlicht, P., et al. (2023). What should AI see? Using the public’s opinion to determine the perception of an AI. AI and Ethics, 3(4), 1381–1405. https://doi.org/10.1007/s43681-022-00248-3
Chan, Robin Kien-Wei, Dardashti, Radin, Osinski, Meike, Rottmann, Matthias, Brüggemann, Dominik, Rücker, Cilia, Schlicht, Peter, Hüger, Fabian, Rummel, Nikol, and Gottschalk, Hanno. 2023. “What should AI see? Using the public’s opinion to determine the perception of an AI”. AI and Ethics 3 (4): 1381–1405.
Chan, R. K. - W., Dardashti, R., Osinski, M., Rottmann, M., Brüggemann, D., Rücker, C., Schlicht, P., Hüger, F., Rummel, N., and Gottschalk, H. (2023). What should AI see? Using the public’s opinion to determine the perception of an AI. AI and Ethics 3, 1381–1405.
Chan, R.K.-W., et al., 2023. What should AI see? Using the public’s opinion to determine the perception of an AI. AI and Ethics, 3(4), pp. 1381–1405.
R.K.-W. Chan, et al., “What should AI see? Using the public’s opinion to determine the perception of an AI”, AI and Ethics, vol. 3, 2023, pp. 1381–1405.
Chan, R.K.-W., Dardashti, R., Osinski, M., Rottmann, M., Brüggemann, D., Rücker, C., Schlicht, P., Hüger, F., Rummel, N., Gottschalk, H.: What should AI see? Using the public’s opinion to determine the perception of an AI. AI and Ethics. 3, 1381–1405 (2023).
Chan, Robin Kien-Wei, Dardashti, Radin, Osinski, Meike, Rottmann, Matthias, Brüggemann, Dominik, Rücker, Cilia, Schlicht, Peter, Hüger, Fabian, Rummel, Nikol, and Gottschalk, Hanno. “What should AI see? Using the public’s opinion to determine the perception of an AI”. AI and Ethics 3.4 (2023): 1381–1405.
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0):
Full text(s)
Name
s43681-022-00248-3.pdf
2.25 MB
Access Level
Open Access
Last uploaded
2024-07-09T06:56:05Z
MD5 checksum
a1dc77e5b02bb0cb597887536fa14105
Sources
arXiv: 2206.04776
Preprint: 10.48550/ARXIV.2206.04776