Assessing the applicability and appropriateness of ChatGPT in answering clinical pharmacy questions

Cited: 15
Authors
Fournier, A. [1 ]
Fallet, C. [1 ]
Sadeghipour, F. [1 ,2 ,3 ,4 ]
Perrottet, N. [1 ,2 ]
Affiliations
[1] Ctr Hosp Univ Vaudois CHUV, Serv Pharm, Lausanne, Switzerland
[2] Univ Geneva, Univ Lausanne, Sch Pharmaceut Sci, Geneva, Switzerland
[3] Lausanne Univ Hosp, Ctr Res & Innovat Clin Pharmaceut Sci, Lausanne, Switzerland
[4] Univ Lausanne, Lausanne, Switzerland
Source
ANNALES PHARMACEUTIQUES FRANCAISES | 2024, Vol. 82, No. 03
Keywords
Artificial intelligence; Large language models; ChatGPT; Clinical pharmacy; Healthcare professionals' issues; Risks
DOI
10.1016/j.pharma.2023.11.001
CLC Number
R9 [Pharmacy]
Discipline Code
1007
Abstract
Objectives. - Clinical pharmacists rely on different scientific references to ensure appropriate, safe, and cost-effective drug use. Tools based on artificial intelligence (AI) such as ChatGPT (Generative Pre-trained Transformer) could offer valuable support. The objective of this study was to assess ChatGPT's capacity to respond correctly to clinical pharmacy questions asked by healthcare professionals in our university hospital.

Material and methods. - ChatGPT's capacity to respond correctly to the last 100 consecutive questions recorded in our clinical pharmacy database was assessed. Questions were copied from our FileMaker Pro database and pasted into the ChatGPT (March 14 version) online platform. The generated answers were then copied verbatim into an Excel file. Two blinded clinical pharmacists reviewed all the questions and the answers given by the software. In cases of disagreement, a third blinded pharmacist adjudicated.

Results. - Documentation-related issues (n = 36) and questions on drug administration mode (n = 30) predominated. Among the 69 applicable questions, the rate of correct answers varied from 30% to 57.1% depending on question type, with a global rate of 44.9%. Of the inappropriate answers (n = 38), 20 were incorrect, 18 gave no answer, and 8 were incomplete, with 8 answers belonging to 2 different categories. In no case did ChatGPT give a better answer than the pharmacists.

Conclusions. - ChatGPT showed mixed performance in answering clinical pharmacy questions. Given the high rate of inappropriate answers, it should not replace human expertise. Future studies should focus on optimizing ChatGPT for specific clinical pharmacy questions and explore the potential benefits and limitations of integrating this technology into clinical practice. (c) 2023 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
Pages: 507-513
Page count: 7