Question Answering based Clinical Text Structuring Using Pre-trained Language Model

Cited by: 0
Authors
Qiu, Jiahui [1 ]
Zhou, Yangming [1 ]
Ma, Zhiyuan [1 ]
Ruan, Tong [1 ]
Liu, Jinlin [1 ]
Sun, Jing [2 ]
Affiliations
[1] East China Univ Sci & Technol, Sch Informat Sci & Engn, Shanghai 200237, Peoples R China
[2] Shanghai Jiao Tong Univ, Ruijin Hosp, Sch Med, Shanghai 200025, Peoples R China
Source
2019 IEEE INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOMEDICINE (BIBM) | 2019
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Question answering; Clinical text structuring; Pre-trained language model; Electronic health records;
DOI
10.1109/bibm47256.2019.8983142
Chinese Library Classification (CLC)
Q5 [Biochemistry];
Discipline classification codes
071010; 081704;
Abstract
Clinical text structuring (CTS) is a critical and fundamental task for clinical research. Traditional approaches, such as task-specific end-to-end models and pipeline models, usually suffer from a lack of datasets and from error propagation. In this paper, we present a question answering based clinical text structuring (QA-CTS) task to unify different specific CTS tasks and make datasets shareable. We also propose a novel model that introduces domain-specific features (e.g., clinical named entity information) into a pre-trained language model for the QA-CTS task. Experimental results on Chinese pathology reports collected from Ruijin Hospital demonstrate that the presented QA-CTS task is effective in improving performance on specific tasks, and that the proposed model competes favorably with strong baseline models.
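The abstract describes fusing domain-specific features, such as clinical named-entity information, with a pre-trained language model and then extracting answer spans for the QA-CTS task. The PyTorch sketch below is only a minimal illustration of that general idea under stated assumptions, not the authors' implementation: the class name QACTSSketch, the additive fusion of token and NER-tag embeddings, the small Transformer encoder standing in for the pre-trained language model, and all dimensions are assumptions made for the example.

```python
import torch
import torch.nn as nn

class QACTSSketch(nn.Module):
    """Hypothetical sketch of a QA-style clinical text structuring model:
    a contextual encoder over the concatenated [question ; paragraph] tokens,
    augmented with clinical named-entity tag embeddings, followed by
    answer-span start/end classifiers. Names, fusion method, and dimensions
    are illustrative assumptions, not the paper's architecture."""

    def __init__(self, vocab_size=21128, ner_tag_size=16, hidden=256, heads=4, layers=2):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, hidden)   # stands in for a pre-trained LM encoder
        self.ner_emb = nn.Embedding(ner_tag_size, hidden)    # domain-specific clinical NER features
        encoder_layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=layers)
        self.span_head = nn.Linear(hidden, 2)                 # start / end logits per token

    def forward(self, token_ids, ner_tag_ids):
        # Fuse contextual token representations with NER information by simple
        # addition; the paper's actual fusion mechanism may differ.
        x = self.token_emb(token_ids) + self.ner_emb(ner_tag_ids)
        x = self.encoder(x)
        logits = self.span_head(x)                            # (batch, seq, 2)
        start_logits, end_logits = logits.unbind(dim=-1)
        return start_logits, end_logits

# Toy usage: one sequence of 32 sub-word ids with aligned NER tag ids.
model = QACTSSketch()
tokens = torch.randint(0, 21128, (1, 32))
ner_tags = torch.randint(0, 16, (1, 32))
start_logits, end_logits = model(tokens, ner_tags)
print(start_logits.shape, end_logits.shape)  # torch.Size([1, 32]) torch.Size([1, 32])
```

In a realistic setup, the toy encoder would be replaced by a pre-trained Chinese language model, and the start/end logits would be trained with cross-entropy against gold answer-span boundaries, as is standard for extractive question answering.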
Pages: 1596-1600
Number of pages: 5