ChatGPT versus expert feedback on clinical reasoning questions and their effect on learning: a randomized controlled trial

Cited by: 3
Authors
Cicek, Feray Ekin [1 ]
Ulker, Muserref [1 ]
Ozer, Menekse [1 ]
Kiyak, Yavuz Selim [2 ]
Affiliations
[1] Gazi Univ, Fac Med, TR-06500 Ankara, Turkiye
[2] Gazi Univ, Fac Med, Dept Med Educ & Informat, TR-06500 Ankara, Turkiye
Keywords
ChatGPT; large language models; artificial intelligence; feedback; clinical reasoning
DOI
10.1093/postmj/qgae170
Chinese Library Classification: R5 [Internal Medicine]
Discipline code: 1002; 100201
Abstract
Purpose: To evaluate the effectiveness of ChatGPT-generated feedback compared with expert-written feedback in improving clinical reasoning skills among first-year medical students.
Methods: This randomized controlled trial was conducted at a single medical school and involved 129 first-year medical students randomly assigned to two groups. Both groups completed three formative tests with feedback on urinary tract infections (UTIs; uncomplicated, complicated, pyelonephritis) over five consecutive days as spaced repetition, receiving either expert-written feedback (control, n = 65) or ChatGPT-generated feedback (experiment, n = 64). Clinical reasoning skills were assessed using Key-Features Questions (KFQs) immediately after the intervention and 10 days later. Students' critical approach to artificial intelligence (AI) was also measured before and after disclosing the AI involvement in feedback generation.
Results: There was no significant difference between the mean scores of the control group (immediate: 78.5 ± 20.6, delayed: 78.0 ± 21.2) and the experiment group (immediate: 74.7 ± 15.1, delayed: 76.0 ± 14.5) in overall performance on the KFQs (out of 120 points), either immediately (P = .26) or after 10 days (P = .57), with small effect sizes. However, the control group outperformed the ChatGPT group in complicated UTI cases (P < .001). The experiment group showed a significantly more critical approach to AI after the disclosure, with medium-to-large effect sizes.
Conclusions: ChatGPT-generated feedback can be an effective alternative to expert feedback for improving clinical reasoning skills in medical students, particularly in resource-constrained settings with limited expert availability. However, AI-generated feedback may lack the nuance needed for more complex cases, underscoring the need for expert review. Additionally, exposure to the drawbacks of AI-generated feedback can strengthen students' critical approach towards AI-generated educational content.

Key Messages
What is already known on this topic: Text-based virtual patients with feedback have been shown to improve clinical reasoning, and recent advances in generative artificial intelligence (AI), such as ChatGPT, have opened new ways to provide feedback in medical education. However, the effect of AI-generated feedback had not been compared with expert-written feedback.
What this study adds: While the effect of ChatGPT feedback was generally on par with that of expert feedback, the study identified limitations in AI-generated explanations for more nuanced diagnosis and treatment.
How this study might affect research, practice, or policy: The findings suggest that ChatGPT can be used as a supplementary tool, especially in resource-limited settings where expert feedback is not readily available. Its integration could streamline feedback and improve educational efficiency, but a hybrid approach is recommended to ensure accuracy, with educators reviewing AI-generated feedback.
Pages: 458-463 (6 pages)