Performance of ChatGPT in Ophthalmic Registration and Clinical Diagnosis: Cross-Sectional Study

Citations: 0
Authors
Ming, Shuai [1 ,2 ]
Guo, Xiaohong
Guo, Qingge [1 ,2 ]
Xie, Kunpeng [1 ]
Chen, Dandan [1 ,2 ]
Lei, Bo [1 ,2 ,3 ]
Affiliations
[1] Henan Eye Hosp, Henan Prov Peoples Hosp, Henan Eye Inst, Dept Ophthalmol, 7 Weiwu Rd, Zhengzhou, Peoples R China
[2] Henan Acad Innovat Med Sci, Eye Inst, Zhengzhou, Peoples R China
[3] Zhengzhou Univ, Peoples Hosp, Henan Clin Res Ctr Ocular Dis, Zhengzhou, Peoples R China
Keywords
artificial intelligence; chatbot; ChatGPT; ophthalmic registration; clinical diagnosis; AI; cross-sectional study; eye disease; eye disorder; ophthalmology; health care; outpatient registration; clinical; decision-making; generative AI; vision impairment; ARTIFICIAL-INTELLIGENCE
DOI
10.2196/60226
Chinese Library Classification (CLC)
R19 [Health Organization and Services (Health Care Administration)]
Abstract
Background: Artificial intelligence (AI) chatbots such as ChatGPT are expected to impact vision health care significantly. Their potential to optimize the consultation process and their diagnostic capabilities across a range of ophthalmic subspecialties have yet to be fully explored.

Objective: This study aims to investigate the performance of AI chatbots in recommending ophthalmic outpatient registration and diagnosing eye diseases within clinical case profiles.

Methods: This cross-sectional study used clinical cases from Chinese Standardized Resident Training-Ophthalmology (2nd Edition). For each case, 2 profiles were created: patient with history (Hx) and patient with history and examination (Hx+Ex). These profiles served as independent queries for GPT-3.5 and GPT-4.0 (accessed from March 5 to 18, 2024). Similarly, 3 ophthalmic residents were posed the same profiles in a questionnaire format. The accuracy of recommending ophthalmic subspecialty registration was primarily evaluated using Hx profiles. The accuracy of the top-ranked diagnosis and the accuracy of the diagnosis within the top 3 suggestions (do-not-miss diagnosis) were assessed using Hx+Ex profiles. The gold standard for judgment was the published, official diagnosis. Characteristics of incorrect diagnoses by ChatGPT were also analyzed.

Results: A total of 208 clinical profiles from 12 ophthalmic subspecialties were analyzed (104 Hx and 104 Hx+Ex profiles). For Hx profiles, GPT-3.5, GPT-4.0, and residents showed comparable accuracy in registration suggestions (66/104, 63.5%; 81/104, 77.9%; and 72/104, 69.2%, respectively; P=.07), with ocular trauma, retinal diseases, and strabismus and amblyopia achieving the top 3 accuracies. For Hx+Ex profiles, both GPT-4.0 and residents demonstrated higher diagnostic accuracy than GPT-3.5 (62/104, 59.6% and 63/104, 60.6% vs 41/104, 39.4%; P=.003 and P=.001, respectively). Accuracy for do-not-miss diagnoses also improved (79/104, 76% and 68/104, 65.4% vs 51/104, 49%; P<.001 and P=.02, respectively). The highest diagnostic accuracies were observed in glaucoma; lens diseases; and eyelid, lacrimal, and orbital diseases. GPT-4.0 recorded fewer incorrect top-3 diagnoses (25/42, 60% vs 53/63, 84%; P=.005) and more partially correct diagnoses (21/42, 50% vs 7/63, 11%; P<.001) than GPT-3.5, while GPT-3.5 had more completely incorrect diagnoses (27/63, 43% vs 7/42, 17%; P=.005) and more diagnoses that were less precise (22/63, 35% vs 5/42, 12%; P=.009).

Conclusions: GPT-3.5 and GPT-4.0 showed intermediate performance in recommending ophthalmic subspecialties for registration. While GPT-3.5 underperformed, GPT-4.0 approached and numerically surpassed residents in differential diagnosis. AI chatbots show promise in facilitating ophthalmic patient registration. However, their integration into diagnostic decision-making requires more validation.
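The abstract reports pairwise accuracy comparisons as counts with P values but does not name the statistical test. The following is a minimal, illustrative Python sketch (not taken from the paper) of how one such comparison can be checked from the reported counts, assuming a chi-squared test on a 2x2 table of correct versus incorrect diagnoses:

# Illustrative only: recomputes one comparison from the abstract's counts,
# assuming a chi-squared test was used (the abstract does not state the test).
from scipy.stats import chi2_contingency

# Hx+Ex top-ranked diagnostic accuracy: GPT-4.0 62/104 vs GPT-3.5 41/104.
table = [
    [62, 104 - 62],  # GPT-4.0: correct, incorrect
    [41, 104 - 41],  # GPT-3.5: correct, incorrect
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# p comes out near the reported P=.003; the exact value depends on whether
# a continuity correction is applied.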
Pages: 14
Related Articles
50 records in total
  • [41] Comparisons of Quality, Correctness, and Similarity Between ChatGPT-Generated and Human-Written Abstracts for Basic Research: Cross-Sectional Study
    Cheng, Shu-Li
    Tsai, Shih-Jen
    Bai, Ya-Mei
    Ko, Chih-Hung
    Hsu, Chih-Wei
    Yang, Fu-Chi
    Tsai, Chia-Kuang
    Tu, Yu-Kang
    Yang, Szu-Nian
    Tseng, Ping-Tao
    Hsu, Tien-Wei
    Liang, Chih-Sung
    Su, Kuan-Pin
    [J]. JOURNAL OF MEDICAL INTERNET RESEARCH, 2023, 25
  • [42] Assessing ChatGPT-4’s performance on the US prosthodontic exam: impact of fine-tuning and contextual prompting vs. base knowledge, a cross-sectional study
    Dashti, Mahmood
    Khosraviani, Farshad
    Azimi, Tara
    Hefzi, Delband
    Ghasemi, Shohreh
    Fahimipour, Amir
    Zare, Niusha
    Khurshid, Zohaib
    Habib, Syed Rashid
    [J]. BMC MEDICAL EDUCATION, 25 (1)
  • [43] ChatGPT (GPT-3.5) as an assistant tool in microbial pathogenesis studies in Sweden: a cross-sectional comparative study
    Hultgren, Catharina
    Lindkvist, Annica
    Ozenci, Volkan
    Curbo, Sophie
    [J]. JOURNAL OF EDUCATIONAL EVALUATION FOR HEALTH PROFESSIONS, 2023, 20
  • [44] Clinical Accuracy, Relevance, Clarity, and Emotional Sensitivity of Large Language Models to Surgical Patient Questions: Cross-Sectional Study
    Dagli, Mert Marcel
    Oettl, Felix Conrad
    Ujral, Jaskeerat
    Malhotra, Kashish
    Ghenbot, Yohannes
    Yoon, Jang W.
    Ozturk, Ali K.
    Welch, William C.
    [J]. JMIR FORMATIVE RESEARCH, 2024, 8
  • [45] Artificial intelligence chatbots in transfusion medicine: A cross-sectional study
    Srivastava, Prateek
    Tewari, Ashish
    Al-Riyami, Arwa Z.
    [J]. VOX SANGUINIS, 2025,
  • [46] Characterizing the Adoption and Experiences of Users of Artificial Intelligence-Generated Health Information in the United States: Cross-Sectional Questionnaire Study
    Ayo-Ajibola, Oluwatobiloba
    Davis, Ryan J.
    Lin, Matthew E.
    Riddel, Jeffrey
    Kravitz, Richard L.
    [J]. JOURNAL OF MEDICAL INTERNET RESEARCH, 2024, 26
  • [47] We Can Rely on ChatGPT as an Educational Tutor: A Cross-Sectional Study of its Performance, Accuracy, and Limitations in University Admission Tests
    Beltozar-Clemente, Saul
    Diaz-Vega, Enrique
    Tejeda-Navarrete, Raul
    Zapata-Paulini, Joselyn
    [J]. INTERNATIONAL JOURNAL OF ENGINEERING PEDAGOGY, 2024, 14 (01): 50-60
  • [48] Assessing ChatGPT's Diagnostic Accuracy and Therapeutic Strategies in Oral Pathologies: A Cross-Sectional Study
    Uranbey, Oemer
    Ozbey, Furkan
    Kaygisiz, Omer
    Ayranci, Ferhat
    [J]. CUREUS JOURNAL OF MEDICAL SCIENCE, 2024, 16 (04)
  • [49] Registered Nurses' Attitudes Towards ChatGPT and Self-Directed Learning: A Cross-Sectional Study
    Chang, Li-Chun
    Wang, Ya-Ni
    Lin, Hui-Ling
    Liao, Li-Ling
    [J]. JOURNAL OF ADVANCED NURSING, 2024,
  • [50] Comparing ChatGPT and a Single Anesthesiologist's Responses to Common Patient Questions: An Exploratory Cross-Sectional Survey of a Panel of Anesthesiologists
    Kuo, Frederick H.
    Fierstein, Jamie L.
    Tudor, Brant H.
    Gray, Geoffrey M.
    Ahumada, Luis M.
    Watkins, Scott C.
    Rehman, Mohamed A.
    [J]. JOURNAL OF MEDICAL SYSTEMS, 2024, 48 (01)