Performance of ChatGPT in Ophthalmic Registration and Clinical Diagnosis: Cross-Sectional Study

Cited by: 0
Authors
Ming, Shuai [1 ,2 ]
Guo, Xiaohong
Guo, Qingge [1 ,2 ]
Xie, Kunpeng [1 ]
Chen, Dandan [1 ,2 ]
Lei, Bo [1 ,2 ,3 ]
Affiliations
[1] Henan Eye Hosp, Henan Prov Peoples Hosp, Henan Eye Inst, Dept Ophthalmol, 7 Weiwu Rd, Zhengzhou, Peoples R China
[2] Henan Acad Innovat Med Sci, Eye Inst, Zhengzhou, Peoples R China
[3] Zhengzhou Univ, Peoples Hosp, Henan Clin Res Ctr Ocular Dis, Zhengzhou, Peoples R China
Keywords
artificial intelligence; chatbot; ChatGPT; ophthalmic registration; clinical diagnosis; AI; cross-sectional study; eye disease; eye disorder; ophthalmology; health care; outpatient registration; clinical decision-making; generative AI; vision impairment; ARTIFICIAL-INTELLIGENCE
DOI
10.2196/60226
CLC Number
R19 [Health care organizations and services (health services administration)]
Abstract
Background: Artificial intelligence (AI) chatbots such as ChatGPT are expected to impact vision health care significantly. Their potential to optimize the consultation process and diagnostic capabilities across a range of ophthalmic subspecialties has yet to be fully explored.

Objective: This study aims to investigate the performance of AI chatbots in recommending ophthalmic outpatient registration and diagnosing eye diseases within clinical case profiles.

Methods: This cross-sectional study used clinical cases from Chinese Standardized Resident Training-Ophthalmology (2nd Edition). For each case, 2 profiles were created: patient with history (Hx) and patient with history and examination (Hx+Ex). These profiles served as independent queries for GPT-3.5 and GPT-4.0 (accessed from March 5 to 18, 2024). Similarly, 3 ophthalmic residents were posed the same profiles in a questionnaire format. The accuracy of recommending ophthalmic subspecialty registration was primarily evaluated using Hx profiles. The accuracy of the top-ranked diagnosis and the accuracy of the diagnosis within the top 3 suggestions (do-not-miss diagnosis) were assessed using Hx+Ex profiles. The gold standard for judgment was the published, official diagnosis. Characteristics of incorrect diagnoses by ChatGPT were also analyzed.

Results: A total of 208 clinical profiles from 12 ophthalmic subspecialties were analyzed (104 Hx and 104 Hx+Ex profiles). For Hx profiles, GPT-3.5, GPT-4.0, and residents showed comparable accuracy in registration suggestions (66/104, 63.5%; 81/104, 77.9%; and 72/104, 69.2%, respectively; P=.07), with ocular trauma, retinal diseases, and strabismus and amblyopia achieving the top 3 accuracies. For Hx+Ex profiles, both GPT-4.0 and residents demonstrated higher diagnostic accuracy than GPT-3.5 (62/104, 59.6% and 63/104, 60.6% vs 41/104, 39.4%; P=.003 and P=.001, respectively). Accuracy for do-not-miss diagnoses also improved (79/104, 76% and 68/104, 65.4% vs 51/104, 49%; P<.001 and P=.02, respectively). The highest diagnostic accuracies were observed in glaucoma; lens diseases; and eyelid, lacrimal, and orbital diseases. GPT-4.0 recorded fewer incorrect top-3 diagnoses (25/42, 60% vs 53/63, 84%; P=.005) and more partially correct diagnoses (21/42, 50% vs 7/63, 11%; P<.001) than GPT-3.5, while GPT-3.5 produced more completely incorrect diagnoses (27/63, 43% vs 7/42, 17%; P=.005) and more diagnoses that were less precise (22/63, 35% vs 5/42, 12%; P=.009).

Conclusions: GPT-3.5 and GPT-4.0 showed intermediate performance in recommending ophthalmic subspecialties for registration. While GPT-3.5 underperformed, GPT-4.0 approached and numerically surpassed the residents in differential diagnosis. AI chatbots show promise in facilitating ophthalmic patient registration. However, their integration into diagnostic decision-making requires more validation.
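The reported results rest on two simple computations: top-k accuracy (k=1 for the top-ranked diagnosis, k=3 for the do-not-miss diagnosis) and pairwise comparisons of correct/incorrect proportions (the P values above). The Python sketch below illustrates both under stated assumptions: scipy is an assumed dependency, the function names are hypothetical, and the record does not specify the authors' actual analysis code or whether a continuity correction was applied.

    # Minimal sketch (assumed, not the authors' code) of the abstract's two
    # headline computations: top-k accuracy and a two-proportion chi-square test.
    from scipy.stats import chi2_contingency

    def top_k_accuracy(ranked_lists, gold_labels, k):
        # Fraction of cases whose official (gold-standard) diagnosis appears
        # among the top k suggestions; k=1 gives top-ranked accuracy, k=3 the
        # "do-not-miss" accuracy described in the abstract.
        hits = sum(gold in ranked[:k] for ranked, gold in zip(ranked_lists, gold_labels))
        return hits / len(ranked_lists)

    def compare_accuracy(correct_a, n_a, correct_b, n_b):
        # Chi-square test on the 2x2 correct/incorrect contingency table.
        # correction=False is the uncorrected test; whether Yates' correction
        # was used in the paper is not stated in this record.
        table = [[correct_a, n_a - correct_a],
                 [correct_b, n_b - correct_b]]
        chi2, p, dof, expected = chi2_contingency(table, correction=False)
        return p

    # Reported top-1 counts on the 104 Hx+Ex profiles: GPT-4.0 62/104 vs
    # GPT-3.5 41/104; the uncorrected test lands near the abstract's P=.003.
    print(f"P = {compare_accuracy(62, 104, 41, 104):.3f}")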
Pages: 14