ChatGPT Performs Worse on USMLE-Style Ethics Questions Compared to Medical Knowledge Questions

Times Cited: 0
Authors
Danehy, Tessa [1 ]
Hecht, Jessica [1 ]
Kentis, Sabrina [1 ]
Schechter, Clyde B. [2 ]
Jariwala, Sunit P. [3 ]
Affiliations
[1] Albert Einstein Coll Med, Montefiore Med Ctr, Bronx, NY 10461 USA
[2] Albert Einstein Coll Med, Dept Family & Social Med, Bronx, NY USA
[3] Albert Einstein Coll Med, Div Allergy Immunol, Montefiore Med Ctr, Bronx, NY USA
Source
APPLIED CLINICAL INFORMATICS | 2024, Vol. 15, Iss. 05
Keywords
ChatGPT; large language model; artificial intelligence; medical education; USMLE; ethics;
DOI
10.1055/a-2405-0138
CLC Number
R-058
Abstract
Objectives The main objective of this study is to evaluate the ability of the large language model Chat Generative Pre-Trained Transformer (ChatGPT) to accurately answer United States Medical Licensing Examination (USMLE) board-style medical ethics questions compared with medical knowledge questions. Additional objectives are to compare the overall accuracy of GPT-3.5 and GPT-4 and to assess the variability of the responses given by each version.
Methods Using AMBOSS, a third-party USMLE Step exam test-prep service, we selected one group of 27 medical ethics questions and a second group of 27 medical knowledge questions matched on question difficulty for medical students. We ran 30 trials of each question set on GPT-3.5 and GPT-4 and recorded the output. A random-effects linear probability regression model evaluated accuracy, and a Shannon entropy calculation evaluated response variation.
Results Both versions of ChatGPT performed worse on medical ethics questions than on medical knowledge questions. GPT-4 scored 18 percentage points lower (p < 0.05) on medical ethics questions than on medical knowledge questions, and GPT-3.5 scored 7 percentage points lower (p = 0.41). GPT-4 outperformed GPT-3.5 by 22 percentage points (p < 0.001) on medical ethics and 33 percentage points (p < 0.001) on medical knowledge. GPT-4 also exhibited lower overall Shannon entropy on medical ethics and medical knowledge questions (0.21 and 0.11, respectively) than GPT-3.5 (0.59 and 0.55, respectively), indicating lower variability in its responses.
Conclusion Both versions of ChatGPT performed more poorly on medical ethics questions than on medical knowledge questions. GPT-4 significantly outperformed GPT-3.5 in overall accuracy and exhibited significantly lower variability in its answer choices. These findings underscore the need for ongoing assessment of ChatGPT versions for medical education.
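The variability measure described in the Methods is the Shannon entropy of the answer-choice distribution across the 30 repeated trials of a question. The snippet below is a minimal sketch of that calculation, not the authors' code; the question trials, answer letters, and counts shown are hypothetical and only illustrate how identical answers yield zero entropy while more varied answers yield higher entropy.

```python
# Minimal sketch (assumed, not from the study): Shannon entropy of answer
# choices across repeated trials of one multiple-choice question.
from collections import Counter
from math import log2


def shannon_entropy(answers):
    """Return the entropy in bits of the answer-choice distribution."""
    counts = Counter(answers)
    total = len(answers)
    return -sum((c / total) * log2(c / total) for c in counts.values())


# Hypothetical outputs for one question over 30 trials: 24 "C" and 6 "B".
trials = ["C"] * 24 + ["B"] * 6
print(round(shannon_entropy(trials), 2))  # ~0.72 bits
```

Under this measure, a model that gives the same answer on all 30 trials scores 0 bits, and a 50/50 split between two choices scores 1 bit, which is why the lower entropies reported for GPT-4 correspond to lower response variability.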
Pages: 1049-1055
Page count: 7