Question Answering based Clinical Text Structuring Using Pre-trained Language Model

Cited: 0
Authors
Qiu, Jiahui [1 ]
Zhou, Yangming [1 ]
Ma, Zhiyuan [1 ]
Ruan, Tong [1 ]
Liu, Jinlin [1 ]
Sun, Jing [2 ]
Affiliations
[1] East China Univ Sci & Technol, Sch Informat Sci & Engn, Shanghai 200237, Peoples R China
[2] Shanghai Jiao Tong Univ, Ruijin Hosp, Sch Med, Shanghai 200025, Peoples R China
Source
2019 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM) | 2019
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Question answering; Clinical text structuring; Pre-trained language model; Electronic health records;
DOI
10.1109/bibm47256.2019.8983142
Chinese Library Classification (CLC) number
Q5 [Biochemistry];
Discipline classification code
071010; 081704;
Abstract
Clinical text structuring (CTS) is a critical and fundamental task for clinical research. Traditional approaches, such as task-specific end-to-end models and pipeline models, usually suffer from a lack of datasets and from error propagation. In this paper, we present a question answering based clinical text structuring (QA-CTS) task that unifies different specific CTS tasks and makes datasets shareable. We also propose a novel model that introduces domain-specific features (e.g., clinical named entity information) into a pre-trained language model for the QA-CTS task. Experimental results on Chinese pathology reports collected from Ruijin Hospital demonstrate that the presented QA-CTS task is effective in improving performance on specific tasks, and that the proposed model competes favorably with strong baseline models on those tasks.
Pages: 1596 - 1600
Number of pages: 5
Related papers
50 records in total
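For illustration, the minimal sketch below (plain PyTorch with the Hugging Face Transformers library, not the authors' released code) shows one way the approach summarized in the abstract above could be assembled: the query and the clinical text are encoded jointly by a pre-trained language model, per-token clinical named-entity tags are embedded and concatenated with the contextual representations as the domain-specific feature, and a span head predicts the start and end positions of the answer. The model name, tag-set size, concatenation-based fusion, and the toy Chinese query/passage are assumptions made for this sketch, not details taken from the paper.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class QACTSModel(nn.Module):
    """Sketch of a QA-style clinical text structuring model (assumed architecture)."""
    def __init__(self, plm_name="bert-base-chinese", num_ner_tags=10, ner_dim=32):
        super().__init__()
        self.encoder = BertModel.from_pretrained(plm_name)    # pre-trained language model
        self.ner_embed = nn.Embedding(num_ner_tags, ner_dim)  # clinical NER tags as domain-specific features
        hidden = self.encoder.config.hidden_size
        self.span_head = nn.Linear(hidden + ner_dim, 2)       # start/end logits per token

    def forward(self, input_ids, attention_mask, token_type_ids, ner_tag_ids):
        out = self.encoder(input_ids=input_ids,
                           attention_mask=attention_mask,
                           token_type_ids=token_type_ids)
        # Fuse contextual representations with embedded NER tags by concatenation
        # (an assumed fusion scheme for this sketch).
        fused = torch.cat([out.last_hidden_state, self.ner_embed(ner_tag_ids)], dim=-1)
        start_logits, end_logits = self.span_head(fused).split(1, dim=-1)
        return start_logits.squeeze(-1), end_logits.squeeze(-1)

# Usage: encode "query [SEP] clinical text" as one sequence; the highest-scoring
# (start, end) token pair is taken as the structured value. The query and pathology
# sentence below are made-up examples, not data from the paper.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
enc = tokenizer("肿瘤大小", "肿瘤最大径约2.5cm。", return_tensors="pt")
ner_tags = torch.zeros_like(enc["input_ids"])                  # placeholder NER tag ids
model = QACTSModel()
start_logits, end_logits = model(enc["input_ids"], enc["attention_mask"],
                                 enc["token_type_ids"], ner_tags)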
  • [31] A pre-trained language model for emergency department intervention prediction using routine physiological data and clinical narratives
    Huang, Ting-Yun
    Chong, Chee-Fah
    Lin, Heng-Yu
    Chen, Tzu-Ying
    Chang, Yung-Chun
    Lin, Ming-Chin
    INTERNATIONAL JOURNAL OF MEDICAL INFORMATICS, 2024, 191
  • [32] ARoBERT: An ASR Robust Pre-Trained Language Model for Spoken Language Understanding
    Wang, Chengyu
    Dai, Suyang
    Wang, Yipeng
    Yang, Fei
    Qiu, Minghui
    Chen, Kehan
    Zhou, Wei
    Huang, Jun
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 1207 - 1218
  • [33] Satellite and instrument entity recognition using a pre-trained language model with distant supervision
    Lin, Ming
    Jin, Meng
    Liu, Yufu
    Bai, Yuqi
    INTERNATIONAL JOURNAL OF DIGITAL EARTH, 2022, 15 (01) : 1290 - 1304
  • [34] Towards automatic question generation using pre-trained model in academic field for Bahasa Indonesia
    Suhartono, Derwin
    Majiid, Muhammad Rizki Nur
    Fredyan, Renaldy
    EDUCATION AND INFORMATION TECHNOLOGIES, 2024, 29 (16) : 21295 - 21330
  • [35] Multi-task Learning based Pre-trained Language Model for Code Completion
    Liu, Fang
    Li, Ge
    Zhao, Yunfei
    Jin, Zhi
    2020 35TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING (ASE 2020), 2020, : 473 - 485
  • [36] SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-Trained Language Model
    Sun, Yi
    Qiu, Hangping
    Zheng, Yu
    Wang, Zhongwei
    Zhang, Chaoran
    IEEE ACCESS, 2020, 8 : 10896 - 10906
  • [37] Talent Supply and Demand Matching Based on Prompt Learning and the Pre-Trained Language Model
    Li, Kunping
    Liu, Jianhua
    Zhuang, Cunbo
    APPLIED SCIENCES-BASEL, 2025, 15 (05):
  • [38] Named-Entity Recognition for a Low-resource Language using Pre-Trained Language Model
    Yohannes, Hailemariam Mehari
    Amagasa, Toshiyuki
    37TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, 2022, : 837 - 844
  • [39] Pre-trained language model augmented adversarial training network for Chinese clinical event detection
    Zhang, Zhichang
    Zhang, Minyu
    Zhou, Tong
    Qiu, Yanlong
    MATHEMATICAL BIOSCIENCES AND ENGINEERING, 2020, 17 (04) : 2825 - 2841
  • [40] Lawformer: A pre-trained language model for Chinese legal long documents
    Xiao, Chaojun
    Hu, Xueyu
    Liu, Zhiyuan
    Tu, Cunchao
    Sun, Maosong
    AI OPEN, 2021, 2 : 79 - 84