Assessing unknown potential: quality and limitations of different large language models in the field of otorhinolaryngology

Cited: 2
Authors
Buhr, Christoph R. [1 ,2 ]
Smith, Harry [3 ]
Huppertz, Tilman [1 ]
Bahr-Hamm, Katharina [1 ]
Matthias, Christoph [1 ]
Cuny, Clemens [4 ]
Snijders, Jan Phillipp [4 ]
Ernst, Benjamin Philipp [5 ]
Blaikie, Andrew [2 ]
Kelsey, Tom [3 ]
Kuhn, Sebastian [6 ]
Eckrich, Jonas [1 ]
Affiliations
[1] Johannes Gutenberg Univ Mainz, Univ Med Ctr, Dept Otorhinolaryngol, Langenbeckstr 1, D-55131 Mainz, Rhineland Palat, Germany
[2] Univ St Andrews, Sch Med, St Andrews, Scotland
[3] Univ St Andrews, Sch Comp Sci, St Andrews, Scotland
[4] Outpatient Clin, Dieburg, Germany
[5] Univ Hosp Frankfurt, Dept Otorhinolaryngol, Frankfurt, Germany
[6] Philipps Univ Marburg, Univ Hosp Giessen & Marburg, Inst Digital Med, Marburg, Germany
Keywords
Large language models; artificial intelligence; ChatGPT; Bard; Claude; otorhinolaryngology; digital health; chatbots; global health; chatbot; CHALLENGES; HEALTH;
DOI
10.1080/00016489.2024.2352843
Chinese Library Classification
R76 [Otorhinolaryngology];
Subject Classification Code
100213;
Abstract
Background: Large language models (LLMs) might offer a solution to the lack of trained health personnel, particularly in low- and middle-income countries. However, their strengths and weaknesses remain unclear. Aims/objectives: Here we benchmark different LLMs (Bard 2023.07.13, Claude 2, ChatGPT 4) against six consultants in otorhinolaryngology (ORL). Material and methods: Case-based questions were extracted from the literature and German state examinations. Answers from Bard 2023.07.13, Claude 2, ChatGPT 4, and six ORL consultants were rated blindly on a 6-point Likert scale for medical adequacy, comprehensibility, coherence, and conciseness. Given answers were compared to validated answers and evaluated for hazards. A modified Turing test was performed, and character counts were compared. Results: The LLMs' answers ranked inferior to the consultants' in all categories. Yet the difference between consultants and LLMs was marginal, with the clearest disparity in conciseness and the smallest in comprehensibility. Among the LLMs, Claude 2 was rated best in medical adequacy and conciseness. Consultants' answers matched the validated solution in 93% (228/246) of cases, ChatGPT 4 in 85% (35/41), Claude 2 in 78% (32/41), and Bard 2023.07.13 in 59% (24/41). Answers were rated as potentially hazardous in 10% (24/246) of ratings for ChatGPT 4, 14% (34/246) for Claude 2, 19% (46/246) for Bard 2023.07.13, and 6% (71/1230) for consultants. Conclusions and significance: Despite the consultants' superior performance, LLMs show potential for clinical application in ORL. Future studies should assess their performance on a larger scale.
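The match rates reported above are simple ratios of correct answers (or ratings) to totals. A minimal sketch recomputing them from the abstract's raw counts (the function name and dictionary are illustrative, not from the paper):

```python
# Recompute the reported match rates from the raw counts in the abstract.
# All numbers come directly from the abstract; naming is illustrative only.

def match_rate(correct: int, total: int) -> int:
    """Return the match rate as a percentage, rounded to the nearest integer."""
    return round(100 * correct / total)

# (correct answers, total answers) per responder, as stated in the abstract
reported = {
    "Consultants":     (228, 246),
    "ChatGPT 4":       (35, 41),
    "Claude 2":        (32, 41),
    "Bard 2023.07.13": (24, 41),
}

for name, (correct, total) in reported.items():
    print(f"{name}: {match_rate(correct, total)}%")
```

Running this reproduces the percentages given in the abstract (93%, 85%, 78%, and 59%), confirming the counts and percentages are mutually consistent.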
Pages: 237-242 (6 pages)