Evaluating performance of large language models for atrial fibrillation management using different prompting strategies and languages

Authors
Zexi Li [1 ]
Chunyi Yan [2 ]
Ying Cao [1 ]
Aobo Gong [1 ]
Fanghui Li [1 ]
Rui Zeng [1 ]
Affiliations
[1] Department of Cardiology, West China Hospital, Sichuan University
[2] Department of Pediatric Cardiology, West China Second University Hospital, Sichuan University
Keywords
Atrial fibrillation; Artificial intelligence; Large language models; Prompt engineering; ChatGPT
DOI
10.1038/s41598-025-04309-5
Abstract
This study evaluated large language models (LLMs) using 30 questions, each derived from a recommendation in the 2024 European Society of Cardiology (ESC) guidelines for the management of atrial fibrillation (AF). The recommendations were stratified by class of recommendation and level of evidence. The primary objective was to assess the reliability and consistency of LLM-generated classifications against those in the ESC guidelines; the study also assessed the impact of different prompting strategies and working languages on LLM performance. Three prompting strategies were tested: input-output (IO), zero-shot chain-of-thought (0-COT), and performed chain-of-thought (P-COT) prompting. Each question, presented in both English and Chinese, was input into three LLMs: ChatGPT-4o, Claude 3.5 Sonnet, and Gemini 1.5 Pro. The reliability of the different LLM-prompt combinations showed moderate to substantial agreement (Fleiss' kappa ranged from 0.449 to 0.763). Claude 3.5 Sonnet with P-COT prompting achieved the highest consistency with the guideline recommendation classes (60.3%). No significant differences were observed between English and Chinese for most LLM-prompt combinations. Bias analysis of inconsistent outputs revealed, across most LLM-prompt combinations, a propensity to assign stronger recommendation classes and higher levels of evidence than the guidelines. The characteristics of the clinical questions may influence LLM performance. This study highlights the limited accuracy of LLM responses to AF-related questions; repeated queries are advisable to obtain more comprehensive insights. Future efforts should focus on broader use of diverse prompting strategies, ongoing model evaluation and refinement, and the establishment of a comprehensive, objective benchmarking system.
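The sketch below illustrates the kind of evaluation loop and reliability analysis the abstract describes. It is not the authors' code: the prompt templates are hypothetical paraphrases of the IO / 0-COT / P-COT strategies, query_llm is a placeholder for whichever chat API is used, the ratings are mock data, and the choice of statsmodels for Fleiss' kappa is an assumption rather than the paper's stated tooling.

```python
# Minimal sketch (assumed, not the authors' pipeline) of repeated-query
# reliability and guideline-consistency analysis for LLM classifications.
from collections import Counter

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical prompt templates for the three strategies (assumed wording).
PROMPTS = {
    "IO": (  # input-output: ask for the label directly
        "Classify this AF recommendation by ESC class (I, IIa, IIb, III) "
        "and level of evidence (A, B, C): {question}"
    ),
    "0-COT": (  # zero-shot chain of thought: generic reasoning cue
        "{question}\nLet's think step by step, then state the class "
        "and level of evidence."
    ),
    "P-COT": (  # performed chain of thought: pre-specified reasoning steps
        "{question}\nFirst summarise the evidence, then weigh benefit "
        "against risk, then state the class and level of evidence."
    ),
}

def query_llm(model: str, prompt: str) -> str:
    """Placeholder: send the prompt to `model` and parse the class label."""
    raise NotImplementedError

# Reliability: Fleiss' kappa over repeated queries of one LLM-prompt
# combination. ratings[i, j] = class label from the j-th repeat on question i.
rng = np.random.default_rng(0)
classes = ["I", "IIa", "IIb", "III"]
ratings = rng.choice(classes, size=(30, 5))   # mock: 30 questions x 5 runs

table, _ = aggregate_raters(ratings)          # questions x category counts
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa = {kappa:.3f}")         # 0.41-0.60 moderate, 0.61-0.80 substantial

# Consistency: share of questions whose modal label matches the guideline class.
guideline = rng.choice(classes, size=30)      # mock gold labels
modal = [Counter(row).most_common(1)[0][0] for row in ratings]
print(f"consistency = {np.mean(np.array(modal) == guideline):.1%}")
```

Under this reading, the paper's reported kappa range of 0.449 to 0.763 spans the conventional moderate and substantial agreement bands, and the 60.3% figure corresponds to the modal-label consistency computed in the last two lines.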