BioBERT: a pre-trained biomedical language representation model for biomedical text mining

Cited by: 3301
Authors
Lee, Jinhyuk [1 ]
Yoon, Wonjin [1 ]
Kim, Sungdong [2 ]
Kim, Donghyeon [1 ]
Kim, Sunkyu [1 ]
So, Chan Ho [3 ]
Kang, Jaewoo [1 ,3 ]
Affiliations
[1] Korea Univ, Dept Comp Sci & Engn, Seoul 02841, South Korea
[2] Naver Corp, Clova Res, Seongnam 13561, South Korea
[3] Korea Univ, Interdisciplinary Grad Program Bioinformat, Seoul 02841, South Korea
Funding
National Research Foundation of Singapore;
Keywords
RECOGNITION; CORPUS;
DOI
10.1093/bioinformatics/btz682
Chinese Library Classification
Q5 [Biochemistry];
Subject classification codes
071010 ; 081704 ;
Abstract
Motivation: Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. Results: We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts.
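As a concrete illustration of the fine-tuning step described in the abstract (one pre-trained model, nearly the same architecture across tasks), the sketch below loads a BioBERT checkpoint for biomedical named entity recognition with the Hugging Face transformers library. The checkpoint name "dmis-lab/biobert-v1.1", the toy label set and the single forward pass are illustrative assumptions, not the exact setup used in the paper.

# Minimal sketch, assuming the Hugging Face checkpoint "dmis-lab/biobert-v1.1";
# the BIO label set below is hypothetical and stands in for a real NER corpus.
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

labels = ["O", "B-Disease", "I-Disease"]  # hypothetical tag set

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModelForTokenClassification.from_pretrained(
    "dmis-lab/biobert-v1.1", num_labels=len(labels)
)

# Encode one sentence and run a forward pass; in practice this would sit
# inside a standard token-classification fine-tuning loop.
enc = tokenizer("BRCA1 mutations are linked to breast cancer.",
                return_tensors="pt")
with torch.no_grad():
    logits = model(**enc).logits      # shape: (1, seq_len, num_labels)
predicted_tag_ids = logits.argmax(dim=-1)  # per-token label ids

The same pre-trained encoder is reused unchanged across tasks; only the lightweight task head (token classification here, sequence classification for relation extraction, span prediction for question answering) differs.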
Pages: 1234-1240
Page count: 7