Assessing the applicability and appropriateness of ChatGPT in answering clinical pharmacy questions

Cited by: 10
Authors
Fournier, A. [1 ]
Fallet, C. [1 ]
Sadeghipour, F. [1 ,2 ,3 ,4 ]
Perrottet, N. [1 ,2 ]
Affiliations
[1] Ctr Hosp Univ Vaudois CHUV, Serv Pharm, Lausanne, Switzerland
[2] Univ Geneva, Univ Lausanne, Sch Pharmaceut Sci, Geneva, Switzerland
[3] Lausanne Univ Hosp, Ctr Res & Innovat Clin Pharmaceut Sci, Lausanne, Switzerland
[4] Univ Lausanne, Lausanne, Switzerland
Source
ANNALES PHARMACEUTIQUES FRANCAISES | 2024, Vol. 82, No. 3
Keywords
Artificial intelligence; Large language models; ChatGPT; Clinical pharmacy; Healthcare professionals' issues; RISKS
DOI
10.1016/j.pharma.2023.11.001
Chinese Library Classification
R9 [Pharmacy]
Subject classification code
1007
Abstract
Objectives. - Clinical pharmacists rely on different scientific references to ensure appropriate, safe, and cost-effective drug use. Tools based on artificial intelligence (AI) such as ChatGPT (Generative Pre-trained Transformer) could offer valuable support. The objective of this study was to assess ChatGPT's capacity to respond correctly to clinical pharmacy questions asked by healthcare professionals in our university hospital.
Material and methods. - ChatGPT's capacity to respond correctly to the last 100 consecutive questions recorded in our clinical pharmacy database was assessed. Questions were copied from our FileMaker Pro database and pasted into the ChatGPT (March 14 version) online platform. The generated answers were then copied verbatim into an Excel file. Two blinded clinical pharmacists reviewed all the questions and the answers given by the software. In case of disagreement, a third blinded pharmacist intervened to decide.
Results. - Documentation-related issues (n = 36) and questions on drug administration mode (n = 30) were the most frequently recorded. Among the 69 applicable questions, the rate of correct answers varied from 30% to 57.1% depending on question type, with a global rate of 44.9%. Regarding inappropriate answers (n = 38), 20 were incorrect, 18 gave no answer, and 8 were incomplete, with 8 answers belonging to 2 different categories. No answer better than the pharmacists' was observed.
Conclusions. - ChatGPT demonstrated a mixed performance in answering clinical pharmacy questions. It should not replace human expertise, as a high rate of inappropriate answers was highlighted. Future studies should focus on the optimization of ChatGPT for specific clinical pharmacy questions and explore the potential benefits and limitations of integrating this technology into clinical practice. (c) 2023 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.
Pages: 507-513
Page count: 7
Related articles (50 in total)
  • [21] Evaluating the performance of ChatGPT in clinical pharmacy: A comparative study of ChatGPT and clinical pharmacists
    Huang, Xiaoru
    Estau, Dannya
    Liu, Xuening
    Yu, Yang
    Qin, Jiguang
    Li, Zijian
    BRITISH JOURNAL OF CLINICAL PHARMACOLOGY, 2024, 90 (01) : 232 - 238
  • [22] Accuracy of ChatGPT3.5 in answering clinical questions on guidelines for severe acute pancreatitis
    Qiu, Jun
    Luo, Li
    Zhou, Youlian
    BMC GASTROENTEROLOGY, 2024, 24 (01)
  • [23] ChatGPT efficacy for answering musculoskeletal anatomy questions: a study evaluating quality and consistency between raters and timepoints
    Mantzou, Nikolaos
    Ediaroglou, Vasileios
    Drakonaki, Elena
    Syggelos, Spyros A.
    Karageorgos, Filippos F.
    Totlis, Trifon
    SURGICAL AND RADIOLOGIC ANATOMY, 2024, 46 (11) : 1885 - 1890
  • [24] Assessing ChatGPT's Responses to Otolaryngology Patient Questions
    Carnino, Jonathan M.
    Pellegrini, William R.
    Willis, Megan
    Cohen, Michael B.
    Paz-Lansberg, Marianella
    Davis, Elizabeth M.
    Grillone, Gregory A.
    Levi, Jessica R.
    ANNALS OF OTOLOGY RHINOLOGY AND LARYNGOLOGY, 2024, 133 (07) : 658 - 664
  • [25] Evaluating the performance of ChatGPT in answering questions related to pediatric urology
    Caglar, Ufuk
    Yildiz, Oguzhan
    Meric, Arda
    Ayranci, Ali
    Gelmis, Mucahit
    Sarilar, Omer
    Ozgor, Faruk
    JOURNAL OF PEDIATRIC UROLOGY, 2024, 20 (01) : 26.e1 - 26.e5
  • [26] Letter 2 regarding "Assessing the performance of ChatGPT in answering questions regarding cirrhosis and hepatocellular carcinoma"
    Kleebayoon, Amnuay
    Wiwanitkit, Viroj
    CLINICAL AND MOLECULAR HEPATOLOGY, 2023, 29 (03) : 815 - 816
  • [27] Evaluating ChatGPT's Performance in Answering Questions About Allergic Rhinitis and Chronic Rhinosinusitis
    Ye, Fan
    Zhang, He
    Luo, Xin
    Wu, Tong
    Yang, Qintai
    Shi, Zhaohui
    OTOLARYNGOLOGY-HEAD AND NECK SURGERY, 2024, 171 (02) : 571 - 577
  • [28] ChatGPT performance in assessing musculoskeletal MRI scan appropriateness based on ACR appropriateness criteria
    Tan, Jin Rong
    Lim, Daniel Y. Z.
    Le, Quan
    Karande, Gita Y.
    Chan, Lai Peng
    Ng, Yeong Huei
    Ting, Daniel S. W.
    Madhavan, Sudharsan
    Chan, Hiok Yang
    Tran, Anh N. T.
    Lai, Yusheng Keefe
    SCIENTIFIC REPORTS, 2025, 15 (01)
  • [29] Dr. ChatGPT will see you now: How do Google and ChatGPT compare in answering patient questions on breast reconstruction?
    Liu, Hilary Y.
    Bonetti, Mario Alessandri
    Jeong, Tiffany
    Pandya, Sumaarg
    Nguyen, Vu T.
    Egro, Francesco M.
    JOURNAL OF PLASTIC RECONSTRUCTIVE AND AESTHETIC SURGERY, 2023, 85 : 488 - 497
  • [30] Comparison of the performances between ChatGPT and Gemini in answering questions on viral hepatitis
    Sahin Ozdemir, Meryem
    Ozdemir, Yusuf Emre
    SCIENTIFIC REPORTS, 2025, 15 (01)