ChatGPT in medical school: how successful is AI in progress testing?
Friederichs H, Friederichs WJ, März M (2023)
Medical Education Online 28(1).
Journal article | Published | English
Download
No files have been uploaded; publication record only.
Author(s)
Friederichs, Hendrik;
Friederichs, Wolf Jonas;
März, Maren
Abstract / Note
Background
As a generative artificial intelligence (AI), ChatGPT provides easy access to a wide range of information, including factual knowledge in the field of medicine. Given that knowledge acquisition is a basic determinant of physicians’ performance, teaching and testing different levels of medical knowledge is a central task of medical schools. To measure the factual knowledge reflected in ChatGPT’s responses, we compared the performance of ChatGPT with that of medical students in a progress test.
Methods
A total of 400 multiple-choice questions (MCQs) from the progress test used in German-speaking countries were entered into ChatGPT’s user interface to obtain the percentage of correctly answered questions. We calculated correlations between the correctness of ChatGPT’s responses and its response behavior, namely response time, word count, and the difficulty of each progress test question.
Results
Of the 395 responses evaluated, 65.5% of the progress test questions answered by ChatGPT were correct. On average, ChatGPT required 22.8 s (SD 17.5) for a complete response, which contained 36.2 (SD 28.1) words. Neither response time nor word count correlated with the accuracy of the ChatGPT response (time: rho = −0.08, 95% CI [−0.18, 0.02], t(393) = −1.55, p = 0.121; word count: rho = −0.03, 95% CI [−0.13, 0.07], t(393) = −0.54, p = 0.592). There was, however, a significant correlation between the difficulty index of the MCQs and the accuracy of the ChatGPT response (difficulty: rho = 0.16, 95% CI [0.06, 0.25], t(393) = 3.19, p = 0.002).
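The rank correlations reported above (Spearman's rho between a binary correctness variable and response time, word count, or item difficulty) can be illustrated with a minimal sketch. This is not the study's analysis code; the function below is a plain pure-Python implementation of Spearman's rho with average ranks for ties, and the data are made-up placeholders.

```python
def _average_ranks(values):
    """Assign 1-based ranks; tied values share their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of sorted positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return ranks


def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _average_ranks(x), _average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    scale = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return cov / scale


# Made-up placeholder data: 1 = correct answer, paired with response time in s.
correct = [1, 0, 1, 1, 0, 1, 0, 1]
time_s = [12.1, 40.3, 18.7, 25.0, 33.9, 15.2, 29.8, 21.4]
print(spearman_rho(correct, time_s))
```

Because one variable is binary, this rank correlation with average ranks for ties corresponds to the kind of correctness-versus-behavior association the abstract reports; the confidence intervals and t-statistics in the Results would require an additional inference step not shown here.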
Conclusion
ChatGPT correctly answered two-thirds of all MCQs at the German state licensing exam level in the Progress Test Medicine, outperforming almost all medical students in years 1–3. Its answers are comparable to the performance of medical students in the second half of their studies.
Year of Publication
2023
Journal Title
Medical Education Online
Volume
28
Issue
1
eISSN
1087-2981
Page URI
https://pub.uni-bielefeld.de/record/2979894
Cite
Friederichs, H., Friederichs, W. J., & März, M. (2023). ChatGPT in medical school: how successful is AI in progress testing? Medical Education Online, 28(1). https://doi.org/10.1080/10872981.2023.2220920
Link(s) to full text(s)
Access Level
Open Access
Data provided by European Bioinformatics Institute (EBI)
Citations in Europe PMC
Data provided by Europe PubMed Central.
Web of Science
This record in Web of Science®
Sources
PMID: 37307503
PubMed | Europe PMC