Assessing the research landscape and clinical utility of large language models: a scoping review

Cited by: 62
Authors
Park, Ye-Jean [1 ]
Pillai, Abhinav [2 ]
Deng, Jiawen [1 ]
Guo, Eddie [2 ]
Gupta, Mehul [2 ]
Paget, Mike [2 ]
Naugler, Christopher [2 ]
Affiliations
[1] Univ Toronto, Temerty Fac Med, 1 Kings Coll Cir, Toronto, ON M5S 1A8, Canada
[2] Univ Calgary, Cumming Sch Med, 3330 Hosp Dr NW, Calgary, AB T2N 4N1, Canada
Keywords
Large language models; ChatGPT; Natural language processing; Clinical settings; Scoping review
DOI
10.1186/s12911-024-02459-6
CLC number
R-058
Abstract
Importance: Large language models (LLMs) such as OpenAI's ChatGPT are powerful generative systems that rapidly synthesize natural language responses. Research on LLMs has revealed their potential and pitfalls, especially in clinical settings. However, the evolving landscape of LLM research in medicine has left several gaps regarding their evaluation, application, and evidence base.
Objective: This scoping review aims to (1) summarize current research evidence on the accuracy and efficacy of LLMs in medical applications, (2) discuss the ethical, legal, logistical, and socioeconomic implications of LLM use in clinical settings, (3) explore barriers and facilitators to LLM implementation in healthcare, (4) propose a standardized evaluation framework for assessing LLMs' clinical utility, and (5) identify evidence gaps and propose future research directions for LLMs in clinical applications.
Evidence review: We screened 4,036 records from MEDLINE, EMBASE, CINAHL, medRxiv, bioRxiv, and arXiv from January 2023 (inception of the search) to June 26, 2023 for English-language papers and analyzed findings from 55 worldwide studies. Quality of evidence was reported based on the Oxford Centre for Evidence-based Medicine recommendations.
Findings: Our results demonstrate that LLMs show promise in compiling patient notes, assisting patients in navigating the healthcare system, and, to some extent, supporting clinical decision-making when combined with human oversight. However, their use is limited by biases in training data that may harm patients, the generation of inaccurate but convincing information, and ethical, legal, socioeconomic, and privacy concerns. We also identified a lack of standardized methods for evaluating LLMs' effectiveness and feasibility.
Conclusions and relevance: This review highlights potential future directions and questions to address these limitations and to further explore LLMs' potential in enhancing healthcare delivery.
Question: What is the current state of large language models' (LLMs) application in clinical settings, and what are the primary challenges and opportunities associated with their integration?
Findings: This scoping review, analyzing 55 studies, indicates that while LLMs, including OpenAI's ChatGPT, show potential in compiling patient notes, aiding healthcare navigation, and supporting clinical decision-making, their use is constrained by data biases, the generation of plausible but incorrect information, and various ethical and privacy concerns. The significant variability in the rigor of studies, especially in how LLM responses are evaluated, calls for standardized evaluation methods, including established metrics such as ROUGE, METEOR, G-Eval, and MultiMedQA.
Meaning: The findings suggest a need for enhanced methodologies in LLM research, stressing the importance of integrating real patient data and considering social determinants of health, to improve the applicability and safety of LLMs in clinical environments.
Pages: 14
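
The abstract above recommends standardized lexical-overlap metrics such as ROUGE and METEOR for scoring LLM-generated clinical text against reference text. As a minimal, hypothetical sketch (not taken from the reviewed studies), the snippet below shows how ROUGE-1 and ROUGE-L could be computed with the open-source rouge_score Python package; the note texts are invented placeholders.

```python
# Minimal sketch: scoring an LLM-generated clinical note against a
# clinician-written reference with ROUGE, assuming the open-source
# `rouge_score` package (pip install rouge-score). The texts below
# are invented placeholders, not data from the reviewed studies.
from rouge_score import rouge_scorer

reference = (
    "Patient presents with a three-day history of productive cough "
    "and low-grade fever; chest exam reveals right basal crackles."
)
candidate = (
    "The patient reports three days of productive cough with mild fever; "
    "crackles are heard at the right lung base."
)

# ROUGE-1 measures unigram overlap; ROUGE-L measures the longest common
# subsequence between the candidate and the reference.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, score in scores.items():
    print(f"{name}: precision={score.precision:.2f}, "
          f"recall={score.recall:.2f}, f1={score.fmeasure:.2f}")
```

Overlap metrics of this kind capture surface similarity only; the review also points to model-based and benchmark-style evaluations (e.g., G-Eval, MultiMedQA) for assessing factual and clinical quality.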