Assessing Ability for ChatGPT to Answer Total Knee Arthroplasty-Related Questions

Cited by: 18
Authors
Magruder, Matthew L. [1 ]
Rodriguez, Ariel N. [1 ]
Wong, Jason C. J. [1 ]
Erez, Orry [1 ]
Piuzzi, Nicolas S. [2 ]
Scuderi, Gil R. [3 ]
Slover, James D. [3 ]
Oh, Jason H. [3 ]
Schwarzkopf, Ran [4 ]
Chen, Antonia F. [5 ]
Iorio, Richard [5 ]
Goodman, Stuart B. [6 ]
Mont, Michael A. [7 ]
Affiliations
[1] Maimonides Hosp, Dept Orthopaed Surg, 927 49th St, Brooklyn, NY 11219 USA
[2] Cleveland Clin, Dept Orthopaed Surg, Cleveland, OH USA
[3] Lenox Hill Hosp, Northwell Orthopaed Inst, Dept Orthopaed Surg, New York, NY USA
[4] NYU Langone Hlth, Dept Orthopaed Surg, NYU Langone Orthoped, New York, NY USA
[5] Brigham & Womens Hosp, Dept Orthopaed Surg, Boston, MA USA
[6] Stanford Univ, Sch Med, Dept Orthopaed Surg, Redwood City, CA USA
[7] Sinai Hosp Baltimore, Rubin Inst Adv Orthoped, Baltimore, MD USA
Keywords
ChatGPT; artificial intelligence; large language model; total knee arthroplasty; clinical practice guidelines; ARTIFICIAL-INTELLIGENCE; PERFORMANCE; CALL
DOI
10.1016/j.arth.2024.02.023
Chinese Library Classification (CLC)
R826.8 [Plastic Surgery]; R782.2 [Oral and Maxillofacial Plastic Surgery]; R726.2 [Pediatric Plastic Surgery]; R62 [Plastic Surgery (Reconstructive Surgery)]
Abstract
Background: Artificial intelligence in the field of orthopaedics has been a topic of increasing interest and opportunity in recent years. Its applications are widespread for both physicians and patients, including use in clinical decision-making, in the operating room, and in research. In this study, we aimed to assess the quality of ChatGPT answers to questions related to total knee arthroplasty.

Methods: ChatGPT prompts were created by turning 15 of the American Academy of Orthopaedic Surgeons Clinical Practice Guidelines into questions. An online survey was created that included screenshots of each prompt and the answers to the 15 questions. Surgeons were asked to grade the ChatGPT answers from 1 to 5 on six characteristics: (1) relevance, (2) accuracy, (3) clarity, (4) completeness, (5) evidence-based, and (6) consistency. Eleven adult joint reconstruction fellowship-trained surgeons completed the survey. Questions were subclassified by the subject of the prompt: (1) risk factors, (2) implant/intraoperative, and (3) pain/functional outcomes. The mean and standard deviation were calculated for all answers as well as for each subgroup, and inter-rater reliability (IRR) was also calculated.

Results: All answer characteristics were graded as above average (ie, a score > 3). Relevance received the highest scores (4.43 ± 0.77) from the surgeons surveyed, and consistency received the lowest (3.54 ± 1.10). ChatGPT prompts in the risk factors group received the highest-rated responses, while those in the pain/functional outcomes group received the lowest. The overall IRR was 0.33 (poor reliability), with the highest IRR for relevance (0.43) and the lowest for evidence-based (0.28).

Conclusions: ChatGPT can answer questions regarding well-established clinical guidelines in total knee arthroplasty with above-average accuracy but demonstrates variable reliability. This investigation is the first step in understanding large language models such as ChatGPT and how well they perform in the field of arthroplasty. (c) 2024 Elsevier Inc. All rights reserved.
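The abstract reports per-characteristic mean ± standard deviation scores and an overall inter-rater reliability of 0.33, but does not state which IRR statistic was used. The sketch below is a minimal illustration only: it computes the mean, sample SD, and a two-way random-effects intraclass correlation, ICC(2,1), from a hypothetical 15-answer × 11-rater score matrix. The function name icc2_1, the random placeholder ratings, and the choice of ICC as the IRR measure are assumptions for illustration, not the authors' reported method.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC.

    ratings: array of shape (n_subjects, n_raters), e.g. answers x surgeons.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()

    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-answers
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-raters
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols  # residual

    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Placeholder ratings: 15 ChatGPT answers scored 1-5 by 11 raters (not study data).
rng = np.random.default_rng(42)
scores = rng.integers(1, 6, size=(15, 11)).astype(float)

print(f"mean +/- SD: {scores.mean():.2f} +/- {scores.std(ddof=1):.2f}")
print(f"ICC(2,1):    {icc2_1(scores):.2f}")
```

In the study design described above, the rows would correspond to the 15 guideline-derived prompts and the columns to the 11 fellowship-trained raters, with the calculation repeated for each of the six graded characteristics.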
Pages: 2022-2027
Number of pages: 6