Multifaceted Natural Language Processing Task-Based Evaluation of Bidirectional Encoder Representations From Transformers Models for Bilingual (Korean and English) Clinical Notes: Algorithm Development and Validation

Cited by: 0
Authors
Kim, Kyungmo [1]
Park, Seongkeun [2]
Min, Jeongwon [1]
Park, Sumin [3]
Kim, Ju Yeon [4]
Eun, Jinsu [5]
Jung, Kyuha [5]
Elyson, Yoobin [5]
Kim, Esther [5]
Lee, Eun Young [4]
Lee, Joonhwan [5]
Choi, Jinwook [3]
Affiliations
[1] Seoul Natl Univ, Interdisciplinary Program Bioengn, Seoul, South Korea
[2] Seoul Natl Univ, Med Res Ctr, Seoul, South Korea
[3] Seoul Natl Univ, Inst Med & Biol Engn, Med Res Ctr, Seoul, South Korea
[4] Seoul Natl Univ Hosp, Dept Internal Med, Div Rheumatol, Seoul, South Korea
[5] Seoul Natl Univ, Human Comp Interact Design Lab, Seoul, South Korea
Funding
National Research Foundation of Singapore;
Keywords
natural language processing; NLP; natural language inference; reading comprehension; large language models; transformer; RECOGNITION; EXTRACTION;
DOI
10.2196/52897
CLC number
R-058;
Abstract
Background: The bidirectional encoder representations from transformers (BERT) model has attracted considerable attention in clinical applications, such as patient classification and disease prediction. However, current studies have typically progressed to application development without a thorough assessment of the model's comprehension of clinical context. Furthermore, few comparative studies have examined BERT models on medical documents from non-English-speaking countries, so the applicability of BERT models trained on English clinical notes to non-English contexts has yet to be confirmed. To address these gaps in the literature, this study focused on identifying the most effective BERT model for non-English clinical notes.

Objective: In this study, we evaluated the contextual understanding abilities of various BERT models applied to mixed Korean and English clinical notes. The objective was to identify the BERT model that best understands the context of such documents.

Methods: Using data from 164,460 patients in a South Korean tertiary hospital, we pretrained BERT-base, BERT for Biomedical Text Mining (BioBERT), Korean BERT (KoBERT), and Multilingual BERT (M-BERT) to improve their contextual comprehension capabilities and subsequently compared their performance on 7 fine-tuning tasks.

Results: Model performance varied with the task and with token usage. First, BERT-base and BioBERT excelled in tasks that use classification ([CLS]) token embeddings, such as document classification; BioBERT achieved the highest F1-score of 89.32. Both BERT-base and BioBERT were effective at document pattern recognition even with limited Korean tokens in the vocabulary. Second, M-BERT performed best in reading comprehension tasks, achieving an F1-score of 93.77; better results were obtained when fewer words were replaced with unknown ([UNK]) tokens. Third, M-BERT excelled in the knowledge inference task, in which the correct disease name had to be inferred from 63 candidate disease names after disease mentions in a document were replaced with [MASK] tokens; M-BERT achieved the highest hit@10 score of 95.41.

Conclusions: This study highlighted the effectiveness of various BERT models in a multilingual clinical domain. The findings can be used as a reference in clinical and language-based applications.
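To make the [CLS]-token setup described in the Results concrete, the following minimal sketch loads a BERT-family checkpoint with a sequence-classification head (which sits on the [CLS] representation) and runs it on a mixed Korean-English note. The checkpoint name (dmis-lab/biobert-base-cased-v1.1), the number of labels, and the example note are illustrative assumptions, not the authors' exact configuration; the classification head is randomly initialized here and would be fine-tuned on the hospital data in practice.

```python
# Sketch of [CLS]-token document classification with a BERT-family model.
# Assumptions: Hugging Face transformers, a public BioBERT checkpoint,
# and 5 hypothetical document classes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"  # assumption: any BERT-family checkpoint works
NUM_LABELS = 5                                   # hypothetical number of document classes

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=NUM_LABELS)

# Hypothetical bilingual clinical note (Korean and English mixed).
note = "Chief complaint: joint pain. 환자는 류마티스 관절염 의심 소견을 보임."
inputs = tokenizer(note, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # classification head operates on the [CLS] embedding
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```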
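The knowledge inference evaluation can be illustrated in a similar hedged sketch: a disease mention is replaced by [MASK], the masked-language-model scores are used to rank a closed set of candidate disease names, and a prediction counts toward hit@10 if the true name appears in the top 10. The checkpoint (bert-base-multilingual-cased as a stand-in for the pretrained M-BERT), the small candidate list, and the single-subtoken scoring are assumptions for illustration, not the paper's exact protocol with 63 candidates.

```python
# Sketch of masked disease-name inference with a hit@10 check.
# Assumptions: Hugging Face transformers, a public multilingual BERT checkpoint,
# and a hypothetical 3-name subset of the 63 candidate disease names.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "bert-base-multilingual-cased"  # stand-in for the pretrained M-BERT
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)

candidates = ["lupus", "gout", "osteoarthritis"]  # hypothetical candidate disease names
true_name = "gout"
text = f"The patient was diagnosed with {tokenizer.mask_token} last year."

inputs = tokenizer(text, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos[0]]  # vocabulary scores at the [MASK] position

# Score each candidate by the logit of its first sub-token (a simplification for
# multi-token disease names; the paper's exact scoring scheme may differ).
scores = {c: logits[tokenizer(c, add_special_tokens=False).input_ids[0]].item()
          for c in candidates}
ranked = sorted(scores, key=scores.get, reverse=True)
hit_at_10 = true_name in ranked[:10]
print(ranked, hit_at_10)
```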
Pages: 14