Assessing ChatGPT's Diagnostic Accuracy and Therapeutic Strategies in Oral Pathologies: A Cross-Sectional Study

Cited by: 2
Authors
Uranbey, Oemer [1 ]
Ozbey, Furkan [2 ]
Kaygisiz, Omer [3 ]
Ayranci, Ferhat [1 ]
Affiliations
[1] Ordu Univ, Oral & Maxillofacial Surg, Ordu, Turkiye
[2] Ordu Univ, Oral & Maxillofacial Radiol, Ordu, Turkiye
[3] Gaziantep Univ, Oral & Maxillofacial Surg, Gaziantep, Turkiye
Keywords
oral surgery; large language model; oral pathologies; ChatGPT; artificial intelligence in healthcare
DOI
10.7759/cureus.58607
CLC Classification
R5 [Internal Medicine]
Discipline Classification
1002; 100201
Abstract
Background: The rapid adoption of artificial intelligence (AI) models in the medical field is due to their ability to collaborate with clinicians in the diagnosis and management of a wide range of conditions. This research assesses the diagnostic accuracy and therapeutic strategies of Chat Generative Pre-trained Transformer (ChatGPT) in comparison to dental professionals across 12 clinical cases. Methodology: ChatGPT 3.5 was queried for diagnoses and management plans for 12 retrospective cases. Physicians were asked to rate the complexity of the clinical scenarios and their agreement with ChatGPT's responses on a five-point Likert scale. The complexity of the cases was then compared with the accuracy of the diagnoses and treatment plans. Results: ChatGPT exhibited high accuracy in providing differential diagnoses and acceptable treatment plans. In a survey of 30 attending physicians, scenarios were rated at an overall median difficulty of 3, with acceptable agreement on ChatGPT's differential diagnosis accuracy (overall median 4). Univariate ordinal regression analysis showed that lower diagnosis scores correlated with lower treatment-management scores. Conclusions: ChatGPT's rapid processing aids healthcare by offering an objective, evidence-based approach, reducing human error and workload. However, potential biases may affect outcomes and challenge less-experienced practitioners. AI in healthcare, including ChatGPT, is still evolving, and further research is needed to understand its full potential in analyzing clinical information, establishing diagnoses, and suggesting treatments.
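The abstract reports an association between two 5-point Likert ratings (diagnosis accuracy and treatment-management agreement), established via univariate ordinal regression. As an illustrative stand-in for that analysis (not the authors' code, and with hypothetical rating data), the sketch below measures the same kind of ordinal association with Spearman's rank correlation in pure Python:

```python
# Illustrative sketch only: hypothetical Likert ratings, and Spearman's
# rank correlation as a simpler stand-in for the univariate ordinal
# regression reported in the study.

def average_ranks(values):
    """Assign 1-based ranks, averaging ranks among tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with position i.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-5 Likert ratings for 12 cases (diagnosis vs. treatment plan).
diagnosis = [4, 5, 3, 4, 2, 5, 4, 3, 2, 5, 4, 3]
treatment = [4, 5, 3, 3, 2, 5, 4, 2, 2, 4, 4, 3]
rho = spearman(diagnosis, treatment)
print(f"Spearman rho = {rho:.2f}")  # strongly positive for these data
```

A proportional-odds (ordinal logistic) model, as the paper describes, would additionally yield an odds ratio per one-step increase in the diagnosis rating; rank correlation only captures the direction and strength of the monotone association.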
Pages: 17
Related Articles
50 items in total
  • [21] Exploring Jordanian medical students' perceptions and concerns about ChatGPT in medical education: a cross-sectional study
    Abu Hammour, Adnan
    Hammour, Khawla Abu
    Alhamad, Hamza
    Nassar, Razan
    El-Dahiyat, Faris
    Sawaqed, Majd
    Allan, Aya
    Manaseer, Qusai
    Abu Hammour, Mohammad
    Halboup, Abdulsalam
    Farha, Rana Abu
    JOURNAL OF PHARMACEUTICAL POLICY AND PRACTICE, 2024, 17 (01)
  • [22] Clinical Accuracy of Large Language Models and Google Search Responses to Postpartum Depression Questions: Cross-Sectional Study
    Sezgin, Emre
    Chekeni, Faraaz
    Lee, Jennifer
    Keim, Sarah
    JOURNAL OF MEDICAL INTERNET RESEARCH, 2023, 25
  • [23] Assessing the Efficacy of Large Language Models in Health Literacy: A Comprehensive Cross-Sectional Study
    Amin, Kanhai S.
    Mayes, Linda C.
    Khosla, Pavan
    Doshi, Rushabh H.
    YALE JOURNAL OF BIOLOGY AND MEDICINE, 2024, 97 (01) : 17 - 27
  • [24] A cross-sectional comparative study: ChatGPT 3.5 versus diverse levels of medical experts in the diagnosis of ENT diseases
    Makhoul, Mikhael
    Melkane, Antoine E.
    El Khoury, Patrick
    El Hadi, Christopher
    Matar, Nayla
    EUROPEAN ARCHIVES OF OTO-RHINO-LARYNGOLOGY, 2024, 281 (5) : 2717 - 2721
  • [26] A comparison of the responses between ChatGPT and doctors in the field of cholelithiasis based on clinical practice guidelines: a cross-sectional study
    Mao, Tianyang
    Zhao, Xin
    Jiang, Kangyi
    Xie, Qingyun
    Yang, Manyu
    Wang, Ruoxuan
    Gao, Fengwei
    DIGITAL HEALTH, 2025, 11
  • [27] Assessing ChatGPT 4.0's test performance and clinical diagnostic accuracy on USMLE STEP 2 CK and clinical case reports
    Shieh, Allen
    Tran, Brandon
    He, Gene
    Kumar, Mudit
    Freed, Jason A.
    Majety, Priyanka
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [28] Assessing the Alignment of Large Language Models With Human Values for Mental Health Integration: Cross-Sectional Study Using Schwartz's Theory of Basic Values
    Hadar-Shoval, Dorit
    Asraf, Kfir
    Mizrachi, Yonathan
    Haber, Yuval
    Elyoseph, Zohar
    JMIR MENTAL HEALTH, 2024, 11
  • [29] Comparing ChatGPT and a Single Anesthesiologist's Responses to Common Patient Questions: An Exploratory Cross-Sectional Survey of a Panel of Anesthesiologists
    Kuo, Frederick H.
    Fierstein, Jamie L.
    Tudor, Brant H.
    Gray, Geoffrey M.
    Ahumada, Luis M.
    Watkins, Scott C.
    Rehman, Mohamed A.
    JOURNAL OF MEDICAL SYSTEMS, 2024, 48 (01)
  • [30] Comparative outcomes of AI-assisted ChatGPT and face-to-face consultations in infertility patients: a cross-sectional study
    Cheng, Shaolong
    Xiao, Yuping
    Liu, Ling
    Sun, Xingyu
    POSTGRADUATE MEDICAL JOURNAL, 2024, 100 (1189) : 851 - 855