Extracting medication changes in clinical narratives using pre-trained language models

Cited by: 5
Authors
Ramachandran, Giridhar Kaushik [1 ]
Lybarger, Kevin [1 ]
Liu, Yaya [1 ]
Mahajan, Diwakar [2 ]
Liang, Jennifer J. [2 ]
Tsou, Ching-Huei [2 ]
Yetisgen, Meliha [3 ]
Uzuner, Ozlem [1 ]
Affiliations
[1] George Mason Univ, Dept Informat Sci & Technol, Fairfax, VA 22030 USA
[2] IBM TJ Watson Res Ctr, Yorktown Hts, NY USA
[3] Univ Washington, Dept Biomed Informat & Med Educ, Seattle, WA USA
Funding
US National Institutes of Health
Keywords
Medication information; Machine learning; Natural language processing; Information extraction; AUTOMATIC EXTRACTION; INFORMATION; RECORDS; CORPUS;
DOI
10.1016/j.jbi.2023.104302
Chinese Library Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
An accurate and detailed account of patient medications, including medication changes within the patient timeline, is essential for healthcare providers to deliver appropriate patient care. Changes to a patient's medication may be initiated by healthcare providers or by the patients themselves. Medication changes take many forms, including modifications to a prescribed medication or its dosage. These changes provide information about the patient's overall health and the rationale that led to the current care, and future care can build on the resulting state of the patient. This work explores the automatic extraction of medication change information from free-text clinical notes. The Contextual Medication Event Dataset (CMED) is a corpus of clinical notes with annotations that characterize medication changes through multiple change-related attributes, including the type of change (start, stop, increase, etc.), the initiator of the change, temporality, change likelihood, and negation. Using CMED, we identify medication mentions in clinical text and propose three novel high-performing BERT-based systems that resolve the annotated medication change characteristics. We demonstrate that our proposed systems improve medication change classification performance over the initial work exploring CMED.
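Medication mention identification of the kind described in the abstract is commonly framed as BIO token classification, where a model such as BERT tags each token and the tags are then decoded into spans. The sketch below illustrates only that decoding step; it is a minimal assumption-laden example, not the authors' system, and the `decode_bio` helper, label names, and sample sentence are hypothetical.

```python
# Illustrative sketch (not the paper's code): decoding BIO tags from a
# token classifier into labeled medication mention spans.

def decode_bio(tokens, tags):
    """Group (token, BIO-tag) pairs into labeled spans.

    A "B-X" tag opens a span of type X; consecutive "I-X" tags extend it;
    anything else closes the current span.
    """
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = {"label": tag[2:], "text": [tok]}
        elif tag.startswith("I-") and current and current["label"] == tag[2:]:
            current["text"].append(tok)
        else:
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [{"label": s["label"], "text": " ".join(s["text"])} for s in spans]

# Hypothetical tagger output for a clinical-style sentence.
tokens = ["Patient", "started", "on", "metformin", "500", "mg", "daily"]
tags = ["O", "O", "O", "B-Medication", "O", "O", "O"]
print(decode_bio(tokens, tags))  # [{'label': 'Medication', 'text': 'metformin'}]
```

In a full pipeline, each decoded mention would then be passed to the change-attribute classifiers (change type, initiator, temporality, likelihood, negation) described in the abstract.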
Pages: 12