Patient Triage and Guidance in Emergency Departments Using Large Language Models: Multimetric Study

Cited by: 1
Authors
Wang, Chenxu [1 ,2 ]
Wang, Fei [3 ]
Li, Shuhan [2 ]
Ren, Qing-wen [4 ]
Tan, Xiaomei [2 ]
Fu, Yaoyu [1 ]
Liu, Di [1 ,2 ,5 ]
Qian, Guangwu [6 ]
Cao, Yu [1 ,7 ]
Yin, Rong [1 ,2 ,5 ]
Li, Kang [1 ,5 ]
Affiliations
[1] Sichuan Univ, West China Hosp, West China Biomed Big Data Ctr, 37 Guoxue Lane, Chengdu 610041, Peoples R China
[2] Sichuan Univ, Dept Ind Engn, Chengdu, Peoples R China
[3] Sichuan Univ, West China Sch Med, Dept Nursing, Chengdu, Peoples R China
[4] Univ Hong Kong, Queen Mary Hosp, Dept Med, Hong Kong, Peoples R China
[5] Sichuan Univ, Medx Ctr Informat, Chengdu, Peoples R China
[6] Sichuan Univ, Dept Comp Sci, Chengdu, Peoples R China
[7] Sichuan Univ, West China Hosp, Dept Emergency Med, Chengdu, Peoples R China
Keywords
ChatGPT; artificial intelligence; patient triage; health care; prompt engineering; large language models; Modified Early Warning Score; EARLY WARNING SCORE; MEWS;
DOI
10.2196/71613
Chinese Library Classification (CLC) number
R19 [Health care organization and services (health administration)]
Abstract
Background: Emergency departments (EDs) face significant challenges from overcrowding, prolonged waiting times, and staff shortages, placing increasing strain on health care systems. Efficient triage systems and accurate departmental guidance are critical for alleviating these pressures. Recent advances in large language models (LLMs), such as ChatGPT, offer potential solutions for improving patient triage and outpatient department selection in emergency settings.

Objective: The study aimed to assess the accuracy, consistency, and feasibility of GPT-4-based ChatGPT models (GPT-4o and GPT-4-Turbo) for patient triage using the Modified Early Warning Score (MEWS) and to evaluate GPT-4o's ability to provide accurate outpatient department guidance in simulated patient scenarios.

Methods: A 2-phase experimental study was conducted. In the first phase, 2 ChatGPT models (GPT-4o and GPT-4-Turbo) were evaluated for MEWS-based triage accuracy on 1854 simulated patient scenarios; accuracy and consistency were assessed before and after prompt engineering. In the second phase, GPT-4o was tested for outpatient department selection accuracy on 264 scenarios drawn from the Chinese Medical Case Repository, with each scenario evaluated independently by GPT-4o three times. Data analyses included Wilcoxon tests, Kendall correlation coefficients, and logistic regression.

Results: In the first phase, ChatGPT's MEWS-based triage accuracy improved after prompt engineering. Notably, GPT-4-Turbo then outperformed GPT-4o, reaching 100% accuracy versus 96.2%, even though GPT-4o had performed better before prompt engineering; this suggests that GPT-4-Turbo may be more adaptable to prompt optimization. In the second phase, GPT-4o, which showed stronger emotional responsiveness than GPT-4-Turbo, achieved an overall guidance accuracy of 92.63% (95% CI 90.34%-94.93%), with the highest accuracy in internal medicine (93.51%, 95% CI 90.85%-96.17%) and the lowest in general surgery (91.46%, 95% CI 86.50%-96.43%).

Conclusions: ChatGPT demonstrated promising capability for supporting patient triage and outpatient guidance in EDs. GPT-4-Turbo showed greater adaptability to prompt engineering, whereas GPT-4o exhibited superior responsiveness and emotional interaction, which are essential for patient-facing tasks. Future studies should explore real-world implementation and address the identified limitations to enhance ChatGPT's clinical integration.
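The first phase benchmarks the models against MEWS-based triage of simulated patients. As a point of reference, the sketch below computes MEWS from five routine observations using one commonly cited threshold table (Subbe et al.); the exact cutoffs and the `mews` helper are illustrative assumptions, since the paper's own scoring table is not reproduced in this record.

```python
# Minimal MEWS sketch using one commonly cited threshold table;
# cutoffs vary slightly across implementations.

def mews(sbp: float, hr: float, rr: float, temp: float, avpu: str) -> int:
    """Return the MEWS total for one set of vital signs.

    sbp  - systolic blood pressure (mm Hg)
    hr   - heart rate (beats/min)
    rr   - respiratory rate (breaths/min)
    temp - body temperature (deg C)
    avpu - level of consciousness: "A", "V", "P", or "U"
    """
    score = 0

    # Systolic blood pressure
    if sbp <= 70:
        score += 3
    elif sbp <= 80:
        score += 2
    elif sbp <= 100:
        score += 1
    elif sbp >= 200:
        score += 2

    # Heart rate
    if hr < 40:
        score += 2
    elif hr <= 50:
        score += 1
    elif hr <= 100:
        pass
    elif hr <= 110:
        score += 1
    elif hr <= 129:
        score += 2
    else:
        score += 3

    # Respiratory rate
    if rr < 9:
        score += 2
    elif rr <= 14:
        pass
    elif rr <= 20:
        score += 1
    elif rr <= 29:
        score += 2
    else:
        score += 3

    # Temperature
    if temp < 35.0 or temp >= 38.5:
        score += 2

    # Level of consciousness (AVPU)
    score += {"A": 0, "V": 1, "P": 2, "U": 3}[avpu.upper()]
    return score


# Example: hypotensive, tachycardic, febrile patient responding only to voice.
print(mews(sbp=85, hr=115, rr=24, temp=38.7, avpu="V"))  # 1 + 2 + 2 + 2 + 1 = 8
```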
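Triage accuracy was compared before and after prompt engineering. The sketch below shows one way a single scenario could be submitted to a GPT-4-class model through the OpenAI chat-completions API; the system prompt, JSON output format, and triage levels are hypothetical placeholders, not the authors' engineered prompts.

```python
# Illustrative submission of one simulated scenario for MEWS-based triage.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are an emergency department triage assistant. "
    "Compute the Modified Early Warning Score (MEWS) for the patient, "
    "then return only a JSON object: "
    '{"mews": <integer>, "triage_level": "<low|medium|high|critical>"}.'
)

scenario = (
    "Vital signs: systolic BP 85 mm Hg, heart rate 115/min, "
    "respiratory rate 24/min, temperature 38.7 C, responds to voice."
)

response = client.chat.completions.create(
    model="gpt-4-turbo",   # the study compared gpt-4-turbo and gpt-4o
    temperature=0,         # deterministic output aids consistency checks
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": scenario},
    ],
)
print(response.choices[0].message.content)
```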
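Accuracies are reported with 95% CIs, and the analyses include Wilcoxon tests and Kendall correlation coefficients. A small SciPy sketch of these computations is given below; the arrays and counts are invented placeholders, since the per-scenario data are not part of this record.

```python
# Sketch of the reported analyses: a normal-approximation 95% CI for a
# proportion, a Wilcoxon signed-rank test for paired pre/post accuracy,
# and Kendall's tau for run-to-run consistency (toy data throughout).
import numpy as np
from scipy.stats import wilcoxon, kendalltau

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% CI for a proportion via the normal approximation."""
    p = successes / n
    half = z * np.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Paired accuracy per scenario block before vs. after prompt engineering (toy data).
before = np.array([0.78, 0.81, 0.75, 0.83, 0.80, 0.77])
after = np.array([0.95, 0.97, 0.94, 0.98, 0.96, 0.95])
print(wilcoxon(before, after))

# Agreement between MEWS scores assigned in two independent runs (toy scores).
run1 = np.array([2, 4, 3, 6, 1, 5, 3, 2])
run2 = np.array([2, 4, 4, 6, 1, 5, 3, 2])
print(kendalltau(run1, run2))

print(wald_ci(successes=733, n=792))  # hypothetical counts, not the paper's
```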
Pages: 16