Classifying Unstructured Text in Electronic Health Records for Mental Health Prediction Models: Large Language Model Evaluation Study

Cited by: 0
Authors
Cardamone, Nicholas C. [1]
Olfson, Mark [2]
Schmutte, Timothy [3]
Ungar, Lyle [4]
Liu, Tony [4]
Cullen, Sara W. [5]
Williams, Nathaniel J. [6]
Marcus, Steven C. [5]
Affiliations
[1] Univ Penn, Perelman Sch Med, Dept Psychiat, 3535 Market St, Philadelphia, PA 19104 USA
[2] New York State Psychiat Inst & Hosp, Dept Psychiat, New York, NY USA
[3] Yale Sch Med, Dept Psychiat, New Haven, CT USA
[4] Univ Penn, Comp & Informat Sci, Philadelphia, PA USA
[5] Univ Penn, Sch Social Policy & Practice, Philadelphia, PA USA
[6] Boise State Univ, Sch Social Work, Boise, ID USA
Funding
US National Institutes of Health
Keywords
artificial intelligence; AI; machine learning; ML; natural language processing; NLP; large language model; LLM; ChatGPT; predictive modeling; mental health; health informatics; electronic health record; EHR; EHR system; text; dataset; mental health disorder; emergency department; physical health;
DOI
10.2196/65454
Chinese Library Classification
R-058
Abstract
Background: Prediction models have demonstrated a range of applications across medicine, including the use of electronic health record (EHR) data to identify hospital readmission and mortality risk. Large language models (LLMs) can transform unstructured EHR text into structured features, which can then be integrated into statistical prediction models, ensuring that the results are both clinically meaningful and interpretable.
Objective: This study aims to compare the classification decisions made by clinical experts with those generated by a state-of-the-art LLM, using terms extracted from a large EHR dataset of individuals with mental health disorders seen in emergency departments (EDs).
Methods: Using a dataset drawn from the EHR systems of more than 50 health care provider organizations in the United States from 2016 to 2021, we extracted all clinical terms that appeared in at least 1000 records of individuals admitted to the ED for a mental health-related problem, from a source population of over 6 million ED episodes. Two experienced mental health clinicians (one medically trained psychiatrist and one clinical psychologist) reached consensus on the classification of EHR terms and diagnostic codes into categories. We evaluated the LLM's agreement with clinical judgment across three classification tasks: (1) classify terms as "mental health" or "physical health", (2) classify mental health terms into 1 of 42 prespecified categories, and (3) classify physical health terms into 1 of 19 prespecified broad categories.
Results: Agreement between the LLM and clinical experts was high when categorizing 4553 terms as "mental health" or "physical health" (kappa=0.77, 95% CI 0.75-0.80). However, there was considerable variability in LLM-clinician agreement on the classification of mental health terms (kappa=0.62, 95% CI 0.59-0.66) and physical health terms (kappa=0.69, 95% CI 0.67-0.70).
Conclusions: The LLM showed high agreement with clinical experts when classifying EHR terms into broad mental health or physical health categories, but agreement varied considerably within both sets of finer-grained mental and physical health categories. Importantly, LLMs offer an alternative to manual human coding and hold great potential for creating interpretable features for prediction models.
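To make the evaluation concrete, the sketch below illustrates the kind of pipeline the abstract describes: an LLM is prompted to label each EHR term as "mental health" or "physical health", and agreement with clinician consensus is summarized with Cohen's kappa and a 95% CI. This is a minimal illustration only, not the authors' code; the OpenAI Python client, the "gpt-4o" model name, the prompt wording, and the bootstrap CI are all assumptions standing in for whatever the study actually used.

```python
# Minimal illustrative sketch (not the authors' code). Assumptions: the OpenAI
# Python client, the "gpt-4o" model name, the prompt wording, and a bootstrap
# 95% CI for kappa are placeholders for the study's actual choices.
import numpy as np
from openai import OpenAI
from sklearn.metrics import cohen_kappa_score

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def classify_term(term: str) -> str:
    """Ask the LLM to label an EHR term as 'mental health' or 'physical health'."""
    prompt = (
        "Classify the following electronic health record term as either "
        "'mental health' or 'physical health'. Respond with only the label.\n\n"
        f"Term: {term}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the labelling as deterministic as possible
    )
    return resp.choices[0].message.content.strip().lower()


def kappa_with_ci(clinician_labels, llm_labels, n_boot=2000, seed=0):
    """Cohen's kappa for LLM-clinician agreement plus a bootstrap 95% CI."""
    clinician = np.asarray(clinician_labels)
    llm = np.asarray(llm_labels)
    kappa = cohen_kappa_score(clinician, llm)
    rng = np.random.default_rng(seed)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(clinician), len(clinician))  # resample terms
        boot.append(cohen_kappa_score(clinician[idx], llm[idx]))
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return kappa, (lo, hi)


# Toy usage with made-up terms and clinician consensus labels.
terms = ["major depressive disorder", "type 2 diabetes", "panic attack", "hypertension"]
clinician = ["mental health", "physical health", "mental health", "physical health"]
llm = [classify_term(t) for t in terms]
print(kappa_with_ci(clinician, llm))
```

With only four toy terms the bootstrap is degenerate; with the 4553 terms in the study it would behave sensibly. The finer-grained tasks (42 mental health and 19 physical health categories) would follow the same pattern, with the longer category list enumerated in the prompt.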
Pages: 9