ChatGPT-4 Assistance in Optimizing Emergency Department Radiology Referrals and Imaging Selection

Cited by: 49
Authors
Barash, Yiftach [1,2,5,6]
Klang, Eyal [1,2,4,5]
Konen, Eli [2,3]
Sorin, Vera [1,2,5]
Affiliations
[1] Chaim Sheba Med Ctr, Dept Diagnost Imaging, Tel Hashomer, Israel
[2] Tel Aviv Univ, Sackler Sch Med, Tel Aviv, Israel
[3] Chaim Sheba Med Ctr, Dept Diagnost Imaging, Tel Hashomer, Israel
[4] Chaim Sheba Med Ctr, ARC, Sami Sagol AI Hub, Tel Hashomer, Israel
[5] Chaim Sheba Med Ctr, DeepVis Lab, Tel Hashomer, Israel
[6] Chaim Sheba Med Ctr, Dept Diagnost Imaging, Emek Haela St 1, IL-52621 Ramat Gan, Israel
Keywords
Large language models; ChatGPT; radiology; referrals; AI; REQUEST
DOI
10.1016/j.jacr.2023.06.009
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Codes
1002; 100207; 1009
Abstract
Purpose: The quality of radiology referrals influences patient management and imaging interpretation by radiologists. The aim of this study was to evaluate ChatGPT-4 as a decision support tool for selecting imaging examinations and generating radiology referrals in the emergency department (ED).

Methods: Five consecutive ED clinical notes were retrospectively extracted for each of the following pathologies: pulmonary embolism, obstructing kidney stones, acute appendicitis, diverticulitis, small bowel obstruction, acute cholecystitis, acute hip fracture, and testicular torsion, for a total of 40 cases. These notes were entered into ChatGPT-4 with a request to recommend the most appropriate imaging examinations and protocols. The chatbot was also asked to generate radiology referrals. Two independent radiologists graded each referral on a scale of 1 to 5 for clarity, clinical relevance, and differential diagnosis. The chatbot's imaging recommendations were compared with the ACR Appropriateness Criteria (AC) and with the examinations actually performed in the ED. Agreement between readers was assessed using the linearly weighted Cohen's κ coefficient.

Results: ChatGPT-4's imaging recommendations aligned with the ACR AC and with the ED examinations in all cases. Protocol discrepancies between ChatGPT-4 and the ACR AC were observed in two cases (5%). ChatGPT-4-generated referrals received mean scores of 4.6 and 4.8 for clarity, 4.5 and 4.4 for clinical relevance, and 4.9 from both reviewers for differential diagnosis. Inter-reader agreement was moderate for clinical relevance and clarity and substantial for differential diagnosis grading.

Conclusions: ChatGPT-4 has shown potential in aiding imaging study selection for select clinical cases. As a complementary tool, large language models may improve the quality of radiology referrals. Radiologists should stay informed about this technology while remaining mindful of its potential challenges and risks.
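Two illustrative sketches make the Methods concrete. First, a hypothetical sketch of submitting an ED clinical note to a GPT-4-class model: the study describes entering notes into ChatGPT-4 directly, so the OpenAI Python SDK call, the prompt wording, and the placeholder note below are assumptions for illustration, not the authors' protocol.

# Hypothetical sketch (not the study's actual workflow): asking a
# GPT-4-class model for an imaging recommendation and a draft referral.
# Assumes the openai package (>=1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
clinical_note = "58-year-old male, acute pleuritic chest pain ..."  # placeholder note

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You assist emergency physicians with imaging selection."},
        {"role": "user",
         "content": clinical_note
         + "\n\nRecommend the most appropriate imaging examination and "
           "protocol, then draft a radiology referral."},
    ],
)
print(response.choices[0].message.content)

Second, a minimal sketch of the inter-reader agreement statistic. The reader grades below are hypothetical, not the study's data; only the statistic itself (linearly weighted Cohen's κ, here via scikit-learn) matches the Methods.

# Minimal sketch: linearly weighted Cohen's kappa for two readers'
# 1-to-5 referral grades. The grades here are made up for illustration.
from sklearn.metrics import cohen_kappa_score

reader1 = [5, 4, 5, 3, 4, 5, 5, 4]  # hypothetical grades, reader 1
reader2 = [4, 4, 5, 3, 5, 5, 4, 4]  # hypothetical grades, reader 2

# weights="linear" penalizes disagreements in proportion to how far
# apart the two ordinal grades are on the 1-5 scale.
kappa = cohen_kappa_score(reader1, reader2, labels=[1, 2, 3, 4, 5],
                          weights="linear")
print(f"Linearly weighted Cohen's kappa: {kappa:.2f}")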
Pages: 998-1003
Page count: 6