Extracting medication changes in clinical narratives using pre-trained language models

Cited: 5
Authors
Ramachandran, Giridhar Kaushik [1 ]
Lybarger, Kevin [1 ]
Liu, Yaya [1 ]
Mahajan, Diwakar [2 ]
Liang, Jennifer J. [2 ]
Tsou, Ching-Huei [2 ]
Yetisgen, Meliha [3 ]
Uzuner, Ozlem [1 ]
Affiliations
[1] George Mason Univ, Dept Informat Sci & Technol, Fairfax, VA 22030 USA
[2] IBM TJ Watson Res Ctr, Yorktown Hts, NY USA
[3] Univ Washington, Dept Biomed Informat & Med Educ, Seattle, WA USA
Funding
US National Institutes of Health (NIH)
Keywords
Medication information; Machine learning; Natural language processing; Information extraction; Automatic extraction; Information; Records; Corpus
DOI
10.1016/j.jbi.2023.104302
CLC number
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
An accurate and detailed account of patient medications, including medication changes within the patient timeline, is essential for healthcare providers to provide appropriate patient care. Changes to a patient's medication may be initiated by healthcare providers or by the patients themselves. Medication changes take many forms, including modifications to a prescribed medication and its associated dosage. These changes provide information about the overall health of the patient and the rationale behind the current care, and future care can then build on the resulting state of the patient. This work explores the automatic extraction of medication change information from free-text clinical notes. The Contextual Medication Event Dataset (CMED) is a corpus of clinical notes annotated to characterize medication changes through multiple change-related attributes, including the type of change (start, stop, increase, etc.), the initiator of the change, temporality, change likelihood, and negation. Using CMED, we identify medication mentions in clinical text and propose three novel high-performing BERT-based systems that resolve the annotated medication change characteristics. We demonstrate that our proposed systems improve medication change classification performance over the initial work exploring CMED.
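To make the annotation scheme in the abstract concrete, the change-related attributes it names (type of change, initiator, temporality, likelihood, negation) can be sketched as a simple data model. This is an illustrative sketch only: the class, field, and label names below are hypothetical and do not reproduce the actual CMED schema or label sets.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical label sets, based only on the attributes named in the
# abstract; the real CMED annotation guidelines may define these differently.
class ChangeType(Enum):
    START = "start"
    STOP = "stop"
    INCREASE = "increase"
    DECREASE = "decrease"
    OTHER = "other"

class Initiator(Enum):
    PROVIDER = "provider"
    PATIENT = "patient"

@dataclass
class MedicationChangeEvent:
    """One annotated medication mention with its change-related attributes."""
    mention: str            # medication name span from the clinical note
    change_type: ChangeType
    initiator: Initiator
    temporality: str        # e.g. "past", "present", "future"
    likelihood: str = "certain"  # how certain the change is to occur
    negated: bool = False

# Example annotation for: "The patient stopped taking lisinopril on his own."
event = MedicationChangeEvent(
    mention="lisinopril",
    change_type=ChangeType.STOP,
    initiator=Initiator.PATIENT,
    temporality="past",
)
print(event.change_type.value)  # → stop
```

A classifier over CMED would, in effect, fill in one such record per medication mention; the paper's BERT-based systems resolve these attributes jointly from the surrounding note text.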
Pages: 12
Related papers
50 records
  • [21] Probing for Hyperbole in Pre-Trained Language Models
    Schneidermann, Nina Skovgaard
    Hershcovich, Daniel
    Pedersen, Bolette Sandford
    PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-SRW 2023, VOL 4, 2023, : 200 - 211
  • [22] Pre-trained language models in medicine: A survey
    Luo, Xudong
    Deng, Zhiqi
    Yang, Binxia
    Luo, Michael Y.
    ARTIFICIAL INTELLIGENCE IN MEDICINE, 2024, 154
  • [23] Extracting representative subset from extensive text data for training pre-trained language models
    Suzuki, Jun
    Zen, Heiga
    Kazawa, Hideto
    INFORMATION PROCESSING & MANAGEMENT, 2023, 60 (03)
  • [24] Labeling Explicit Discourse Relations Using Pre-trained Language Models
    Kurfali, Murathan
    TEXT, SPEECH, AND DIALOGUE (TSD 2020), 2020, 12284 : 79 - 86
  • [25] Enhancing Turkish Sentiment Analysis Using Pre-Trained Language Models
    Koksal, Omer
    29TH IEEE CONFERENCE ON SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS (SIU 2021), 2021,
  • [26] Automated LOINC Standardization Using Pre-trained Large Language Models
    Tu, Tao
    Loreaux, Eric
    Chesley, Emma
    Lelkes, Adam D.
    Gamble, Paul
    Bellaiche, Mathias
    Seneviratne, Martin
    Chen, Ming-Jun
    MACHINE LEARNING FOR HEALTH, VOL 193, 2022, 193 : 343 - 355
  • [27] Controlling Translation Formality Using Pre-trained Multilingual Language Models
    Rippeth, Elijah
    Agrawal, Sweta
    Carpuat, Marine
    PROCEEDINGS OF THE 19TH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE TRANSLATION (IWSLT 2022), 2022, : 327 - 340
  • [28] Repairing Security Vulnerabilities Using Pre-trained Programming Language Models
    Huang, Kai
    Yang, Su
    Sun, Hongyu
    Sun, Chengyi
    Li, Xuejun
    Zhang, Yuqing
    52ND ANNUAL IEEE/IFIP INTERNATIONAL CONFERENCE ON DEPENDABLE SYSTEMS AND NETWORKS WORKSHOP VOLUME (DSN-W 2022), 2022, : 111 - 116
  • [29] A Study of Pre-trained Language Models in Natural Language Processing
    Duan, Jiajia
    Zhao, Hui
    Zhou, Qian
    Qiu, Meikang
    Liu, Meiqin
    2020 IEEE INTERNATIONAL CONFERENCE ON SMART CLOUD (SMARTCLOUD 2020), 2020, : 116 - 121
  • [30] A pre-trained language model for emergency department intervention prediction using routine physiological data and clinical narratives
    Huang, Ting-Yun
    Chong, Chee-Fah
    Lin, Heng-Yu
    Chen, Tzu-Ying
    Chang, Yung-Chun
    Lin, Ming-Chin
    INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS, 2024, 191