Performance of ChatGPT in Ophthalmic Registration and Clinical Diagnosis: Cross-Sectional Study

Cited by: 0
Authors
Ming, Shuai [1 ,2 ]
Guo, Xiaohong
Guo, Qingge [1 ,2 ]
Xie, Kunpeng [1 ]
Chen, Dandan [1 ,2 ]
Lei, Bo [1 ,2 ,3 ]
Affiliations
[1] Henan Eye Hosp, Henan Prov Peoples Hosp, Henan Eye Inst, Dept Ophthalmol, 7 Weiwu Rd, Zhengzhou, Peoples R China
[2] Henan Acad Innovat Med Sci, Eye Inst, Zhengzhou, Peoples R China
[3] Zhengzhou Univ, Peoples Hosp, Henan Clin Res Ctr Ocular Dis, Zhengzhou, Peoples R China
Keywords
artificial intelligence; chatbot; ChatGPT; ophthalmic registration; clinical diagnosis; AI; cross-sectional study; eye disease; eye disorder; ophthalmology; health care; outpatient registration; clinical decision-making; generative AI; vision impairment; ARTIFICIAL-INTELLIGENCE
DOI
10.2196/60226
CLC Classification Number
R19 [Health Care Organization and Services (Health Service Management)];
Subject Classification Code
Abstract
Background: Artificial intelligence (AI) chatbots such as ChatGPT are expected to impact vision health care significantly. Their potential to optimize the consultation process and diagnostic capabilities across a range of ophthalmic subspecialties has yet to be fully explored.
Objective: This study aims to investigate the performance of AI chatbots in recommending ophthalmic outpatient registration and diagnosing eye diseases within clinical case profiles.
Methods: This cross-sectional study used clinical cases from Chinese Standardized Resident Training-Ophthalmology (2nd Edition). For each case, 2 profiles were created: patient with history (Hx) and patient with history and examination (Hx+Ex). These profiles served as independent queries for GPT-3.5 and GPT-4.0 (accessed from March 5 to 18, 2024). Similarly, 3 ophthalmic residents were posed the same profiles in a questionnaire format. The accuracy of recommending ophthalmic subspecialty registration was primarily evaluated using Hx profiles. The accuracy of the top-ranked diagnosis and the accuracy of the diagnosis within the top 3 suggestions (do-not-miss diagnosis) were assessed using Hx+Ex profiles. The gold standard for judgment was the published, official diagnosis. Characteristics of incorrect diagnoses by ChatGPT were also analyzed.
Results: A total of 208 clinical profiles from 12 ophthalmic subspecialties were analyzed (104 Hx and 104 Hx+Ex profiles). For Hx profiles, GPT-3.5, GPT-4.0, and residents showed comparable accuracy in registration suggestions (66/104, 63.5%; 81/104, 77.9%; and 72/104, 69.2%, respectively; P=.07), with ocular trauma, retinal diseases, and strabismus and amblyopia achieving the top 3 accuracies. For Hx+Ex profiles, both GPT-4.0 and residents demonstrated higher diagnostic accuracy than GPT-3.5 (62/104, 59.6% and 63/104, 60.6% vs 41/104, 39.4%; P=.003 and P=.001, respectively). Accuracy for do-not-miss diagnoses also improved (79/104, 76% and 68/104, 65.4% vs 51/104, 49%; P<.001 and P=.02, respectively). The highest diagnostic accuracies were observed in glaucoma; lens diseases; and eyelid, lacrimal, and orbital diseases. GPT-4.0 recorded fewer incorrect top-3 diagnoses (25/42, 60% vs 53/63, 84%; P=.005) and more partially correct diagnoses (21/42, 50% vs 7/63, 11%; P<.001) than GPT-3.5, while GPT-3.5 produced more completely incorrect (27/63, 43% vs 7/42, 17%; P=.005) and less precise (22/63, 35% vs 5/42, 12%; P=.009) diagnoses.
Conclusions: GPT-3.5 and GPT-4.0 showed intermediate performance in recommending ophthalmic subspecialties for registration. While GPT-3.5 underperformed, GPT-4.0 approached and numerically surpassed residents in differential diagnosis. AI chatbots show promise in facilitating ophthalmic patient registration. However, their integration into diagnostic decision-making requires further validation.
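The abstract reports group comparisons as counts with P values but does not state which statistical test produced them. As a rough check, and assuming a standard chi-square test of independence (an assumption, not a method stated in the record), the GPT-4.0 vs GPT-3.5 comparison of top-ranked diagnostic accuracy on Hx+Ex profiles can be recomputed from the reported counts with a minimal Python sketch (scipy assumed available):

    # Hedged sketch: re-checking one reported comparison from the abstract.
    # Assumption: chi-square test of independence without continuity correction;
    # the abstract does not specify the authors' actual test.
    from scipy.stats import chi2_contingency

    # Counts taken directly from the abstract (Hx+Ex profiles, top-ranked diagnosis).
    table = [
        [62, 104 - 62],  # GPT-4.0: 62 correct, 42 incorrect
        [41, 104 - 41],  # GPT-3.5: 41 correct, 63 incorrect
    ]

    chi2, p, dof, expected = chi2_contingency(table, correction=False)
    print(f"chi2 = {chi2:.2f}, P = {p:.3f}")
    # Yields roughly chi2 = 8.48, P = 0.004, consistent with the reported P=.003;
    # any small difference may reflect a different test or correction in the paper.

The same two-by-two layout applies to the other reported pairwise comparisons (for example, residents vs GPT-3.5, or the do-not-miss diagnosis counts), with only the cell counts changed.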
Pages: 14