The Performance of ChatGPT on the American Society for Surgery of the Hand Self-Assessment Examination

Cited by: 3
Authors
Arango, Sebastian D. [1]
Flynn, Jason C. [2]
Zeitlin, Jacob [1]
Wilson, Matthew S. [1]
Strohl, Adam B. [1]
Weiss, Lawrence E. [3]
Weir, Tristan B. [1]
Affiliations
[1] Philadelphia Hand Shoulder Ctr, Dept Orthopaed Surg, Philadelphia, PA 19107 USA
[2] Sidney Kimmel Coll Med, Dept Orthopaed Surg, Philadelphia, PA USA
[3] OAA Orthopaed Specialists, Div Orthopaed Hand Surg, Allentown, PA USA
Keywords
self-assessment examination; hand; ChatGPT; ASSH; artificial intelligence
DOI
10.7759/cureus.58950
Chinese Library Classification
R5 [Internal Medicine]
Subject Classification Code
1002; 100201
Abstract
Background: This study compares the performance of ChatGPT-3.5 (GPT-3.5) and ChatGPT-4 (GPT-4) on the American Society for Surgery of the Hand (ASSH) Self-Assessment Examination (SAE) to determine their potential as educational tools.
Methods: This study compared the proportion of text-based questions from the 2021 and 2022 ASSH SAEs answered correctly by untrained versions of GPT-3.5 and GPT-4. Secondary analyses assessed the performance of ChatGPT by question difficulty and question category. The outcomes of ChatGPT were compared with the performance of actual examinees on the ASSH SAE.
Results: A total of 238 questions were included in the analysis. Compared with GPT-3.5, GPT-4 provided significantly more correct answers overall (58.0% versus 68.9%, respectively; P = 0.013), on the 2022 SAE (55.9% versus 72.9%; P = 0.007), and on more difficult questions (48.8% versus 63.6%; P = 0.02). In a multivariable logistic regression analysis, correct answers were positively predicted by GPT-4 (odds ratio [OR], 1.66; P = 0.011) and negatively predicted by increased question difficulty (OR, 0.59; P = 0.009), Bone and Joint questions (OR, 0.18; P < 0.001), and Soft Tissue questions (OR, 0.30; P = 0.013). Actual examinees scored a mean of 21.6% above GPT-3.5 and 10.7% above GPT-4. The mean percentage of correct answers by actual examinees was significantly higher on questions that ChatGPT answered correctly (versus incorrectly).
Conclusions: GPT-4 demonstrated improved performance over GPT-3.5 on the ASSH SAE, especially on more difficult questions. Actual examinees scored higher than both versions of ChatGPT, but GPT-4 cut the margin roughly in half.
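For readers who want to see the shape of the analysis, the sketch below mirrors the two statistical steps the abstract describes: a comparison of correct-answer proportions between models, and a multivariable logistic regression whose exponentiated coefficients are the reported odds ratios. This is not the authors' code; only N = 238 and the two accuracy rates come from the abstract, while the difficulty covariate and simulated per-question outcomes are illustrative stand-ins.

```python
# Minimal sketch of the abstract's analyses; not the authors' code.
import numpy as np
from scipy.stats import chi2_contingency
import statsmodels.api as sm

N = 238
correct_gpt35 = round(0.580 * N)  # 58.0% correct (GPT-3.5)
correct_gpt4 = round(0.689 * N)   # 68.9% correct (GPT-4)

# Step 1: compare the two proportions with a 2x2 contingency table
# (rows = model version, columns = correct / incorrect).
table = np.array([[correct_gpt35, N - correct_gpt35],
                  [correct_gpt4,  N - correct_gpt4]])
chi2, p, _, _ = chi2_contingency(table)
# Lands near the reported P = 0.013; the exact value depends on the
# test and continuity correction used.
print(f"proportion comparison: p = {p:.3f}")

# Step 2: logistic regression for the odds of a correct answer, with
# model version and a hypothetical difficulty covariate. Outcomes are
# simulated at the reported accuracy rates purely for illustration.
rng = np.random.default_rng(0)
is_gpt4 = np.repeat([0, 1], N)        # 0 = GPT-3.5, 1 = GPT-4
difficulty = rng.normal(size=2 * N)   # stand-in difficulty scores
correct = np.concatenate([
    rng.random(N) < 0.580,            # GPT-3.5 at the reported rate
    rng.random(N) < 0.689,            # GPT-4 at the reported rate
]).astype(int)

X = sm.add_constant(np.column_stack([is_gpt4, difficulty]))
fit = sm.Logit(correct, X).fit(disp=0)
# exp(coefficient) gives the OR. In the study's model, GPT-4 had
# OR 1.66 (> 1, higher odds of a correct answer) and increased
# difficulty OR 0.59 (< 1, lower odds).
print("odds ratios [const, GPT-4, difficulty]:", np.exp(fit.params))
```

Note that the study presumably regressed on the actual per-question difficulty and category labels; the independent, simulated covariates above serve only to show how the reported odds ratios map onto regression coefficients.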
Pages: 11