Assessing ChatGPT's Diagnostic Accuracy and Therapeutic Strategies in Oral Pathologies: A Cross-Sectional Study

Cited by: 2
Authors
Uranbey, Oemer [1 ]
Ozbey, Furkan [2 ]
Kaygisiz, Omer [3 ]
Ayranci, Ferhat [1 ]
Affiliations
[1] Ordu Univ, Oral & Maxillofacial Surg, Ordu, Turkiye
[2] Ordu Univ, Oral & Maxillofacial Radiol, Ordu, Turkiye
[3] Gaziantep Univ, Oral & Maxillofacial Surg, Gaziantep, Turkiye
Keywords
oral surgery; large language model; oral pathologies; ChatGPT; artificial intelligence in healthcare
DOI
10.7759/cureus.58607
Chinese Library Classification
R5 [Internal Medicine]
Subject Classification
1002; 100201
Abstract
Background: The rapid adoption of artificial intelligence (AI) models in the medical field is due to their ability to collaborate with clinicians in the diagnosis and management of a wide range of conditions. This research assesses the diagnostic accuracy and therapeutic strategies of Chat Generative Pre-trained Transformer (ChatGPT) in comparison to dental professionals across 12 clinical cases. Methodology: ChatGPT 3.5 was queried for diagnoses and management plans for 12 retrospective cases. Physicians were tasked with rating the complexity of clinical scenarios and their agreement with the ChatGPT responses using a five-point Likert scale. Comparisons were made between the complexity of the cases and the accuracy of the diagnoses and treatment plans. Results: ChatGPT exhibited high accuracy in providing differential diagnoses and acceptable treatment plans. In a survey involving 30 attending physicians, scenarios were rated with an overall median difficulty level of 3, showing acceptable agreement with ChatGPT's differential diagnosis accuracy (overall median 4). Our study revealed lower diagnosis scores correlating with decreased treatment management scores, as demonstrated by univariate ordinal regression analysis. Conclusions: ChatGPT's rapid processing aids healthcare by offering an objective, evidence-based approach, reducing human error and workload. However, potential biases may affect outcomes and challenge less-experienced practitioners. AI in healthcare, including ChatGPT, is still evolving, and further research is needed to understand its full potential in analyzing clinical information, establishing diagnoses, and suggesting treatments.
Pages: 17