Evaluation of responses to cardiac imaging questions by the artificial intelligence large language model ChatGPT

Cited by: 7
Authors
Monroe, Cynthia L. [1 ]
Abdelhafez, Yasser G. [2 ]
Atsina, Kwame [3 ]
Aman, Edris [3 ]
Nardo, Lorenzo [2 ]
Madani, Mohammad H. [2 ]
Affiliations
[1] Calif Northstate Univ, Coll Med, 9700 W Taron Dr, Elk Grove, CA 95757 USA
[2] Univ Calif Davis, Med Ctr, Dept Radiol, 4860 Y St,Suite 3100, Sacramento, CA 95817 USA
[3] Univ Calif Davis, Med Ctr, Div Cardiovasc Med, 4860 Y St,Suite 0200, Sacramento, CA 95817 USA
Keywords
Accuracy; Cardiac imaging; ChatGPT; Patient education; EXPERT CONSENSUS DOCUMENT; COMPUTED-TOMOGRAPHY SCCT; CORONARY-ARTERY-DISEASE; AMERICAN-COLLEGE; RADIOLOGY ACR; SOCIETY;
DOI
10.1016/j.clinimag.2024.110193
Chinese Library Classification
R8 [Special Medicine]; R445 [Imaging Diagnostics]
Subject Classification Codes
1002; 100207; 1009
Abstract
Purpose: To assess ChatGPT's ability as a resource for educating patients on various aspects of cardiac imaging, including diagnosis, imaging modalities, indications, interpretation of radiology reports, and management.
Methods: Thirty questions were posed to ChatGPT-3.5 and ChatGPT-4 three times each, in three separate chat sessions. Responses were scored as correct, incorrect, or clinically misleading by three observers: two board-certified cardiologists and one board-certified radiologist with cardiac imaging subspecialization. Consistency of responses across the three sessions was also evaluated. Final categorization was based on majority vote, i.e., agreement of at least two of the three observers.
Results: ChatGPT-3.5 answered seventeen of twenty-eight questions correctly (61%) by majority vote; ChatGPT-4 answered twenty-one of twenty-eight correctly (75%). A majority vote on correctness was not reached for two questions. ChatGPT-3.5 answered twenty-six of thirty questions consistently (87%), and ChatGPT-4 answered twenty-nine of thirty consistently (97%). ChatGPT-3.5 gave responses that were both consistent and correct for seventeen of twenty-eight questions (61%); ChatGPT-4 did so for twenty of twenty-eight questions (71%).
Conclusion: ChatGPT-4 performed better overall than ChatGPT-3.5 in answering cardiac imaging questions with regard to both correctness and consistency of responses. Although both models answered over half of the cardiac imaging questions correctly, inaccurate, clinically misleading, and inconsistent responses suggest the need for further refinement before these models are applied to educating patients about cardiac imaging.
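The scoring procedure described in the Methods (three observers rating each response, a final label decided by majority vote, and a separate check of whether the model answered consistently across the three chat sessions) can be illustrated with a minimal sketch. The function names and the example ratings below are hypothetical and are not taken from the study.

```python
from collections import Counter

def majority_label(observer_labels):
    """Final category for one response: the label chosen by at least two of the
    three observers ('correct', 'incorrect', or 'clinically misleading');
    returns None when no majority is reached."""
    label, count = Counter(observer_labels).most_common(1)[0]
    return label if count >= 2 else None

def is_consistent(session_labels):
    """True if the model gave the same response category in all three chat sessions."""
    return len(set(session_labels)) == 1

# Hypothetical example, not data from the study:
print(majority_label(["correct", "correct", "clinically misleading"]))  # -> 'correct'
print(is_consistent(["correct", "correct", "correct"]))                 # -> True
```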
Pages: 8