Is ChatGPT knowledgeable of acute coronary syndromes and pertinent European Society of Cardiology Guidelines?

Cited by: 10
Authors
Gurbuz, Dogac C. [1,3]
Varis, Eser [2]
Affiliations
[1] Gurlife Hosp, Dept Cardiol, Eskisehir, Turkiye
[2] Private Hosp, Dept Cardiol, Istanbul, Turkiye
[3] Fevzi Cakmak St, Akinsel Rd 1, Eskisehir, Turkiye
Keywords
Acute coronary syndrome; Artificial intelligence; Mobile applications; GLOBAL REGISTRY
DOI
10.23736/S2724-5683.24.06517-7
CLC classification
R5 [Internal Medicine]
Discipline codes
1002; 100201
Abstract
BACKGROUND: Advances in artificial intelligence are being seen in multiple fields, including medicine, and this trend is likely to continue. This study aimed to analyze the accuracy and reproducibility of ChatGPT answers about acute coronary syndromes (ACS).
METHODS: The questions asked of ChatGPT were prepared in two categories: a list of frequently asked questions (FAQs) created from inquiries posed by the public, and a scientific question list prepared from the 2023 European Society of Cardiology (ESC) Guidelines for the management of ACS and the ESC Clinical Practice Guidelines. The accuracy and reproducibility of ChatGPT responses about ACS were evaluated by two cardiologists, each with ten years of experience, using the Global Quality Score (GQS).
RESULTS: In total, 72 FAQs related to ACS met the study inclusion criteria. Of these, 65 (90.3%) ChatGPT answers scored GQS 5, indicating the highest accuracy and proficiency; none of the ChatGPT responses to FAQs about ACS scored GQS 1. The accuracy and reliability of ChatGPT answers were highest for the prevention and lifestyle section, with GQS 5 for 19 (95%) answers and GQS 4 for 1 (5%) answer, and lowest for the treatment and management section. Moreover, 68 (88.3%) ChatGPT responses to guideline-based questions scored GQS 5. Reproducibility of ChatGPT answers was 94.4% for FAQs and 90.9% for ESC guideline questions.
CONCLUSIONS: This study shows for the first time that ChatGPT can give accurate and sufficient responses to more than 90% of FAQs about ACS. The proficiency and correctness of ChatGPT answers to questions based on ESC guidelines were also substantial.
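The percentages in RESULTS follow directly from the counts the abstract reports. As a quick arithmetic check (a minimal sketch; the 68-answer count behind the 94.4% FAQ reproducibility figure is back-calculated from 72 FAQs and is an assumption, not stated in the abstract):

```python
# Sketch: verify the abstract's reported percentages from its raw counts.
# The count behind the 94.4% reproducibility figure (68 of 72) is
# back-calculated here, not given explicitly in the abstract.

def pct(part: int, whole: int) -> float:
    """Share of `part` in `whole`, as a percentage rounded to one decimal."""
    return round(100 * part / whole, 1)

faq_total = 72         # FAQs meeting the inclusion criteria
faq_gqs5 = 65          # FAQ answers scoring GQS 5
prevention_total = 20  # prevention/lifestyle answers (19 + 1, per abstract)

print(pct(faq_gqs5, faq_total))   # 90.3, matches "65 (90.3%)"
print(pct(19, prevention_total))  # 95.0, matches "GQS 5 for 19 (95%)"
print(pct(68, faq_total))         # 94.4, matches the FAQ reproducibility figure
```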
Pages: 299-303 (5 pages)