RadBERT: Adapting Transformer-based Language Models to Radiology

Cited by: 70
Authors
Yan, An [1]
McAuley, Julian [1]
Lu, Xing [1]
Du, Jiang [1]
Chang, Eric Y. [1,2]
Gentili, Amilcare [1,2]
Hsu, Chun-Nan [1]
Affiliations
[1] Univ Calif San Diego, 9500 Gilman Dr, La Jolla, CA 92093 USA
[2] Vet Affairs San Diego Healthcare Syst, San Diego, CA USA
Funding
U.S. National Science Foundation
Keywords
Translation; Unsupervised Learning; Transfer Learning; Neural Networks; Informatics
DOI
10.1148/ryai.210258
Chinese Library Classification code
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Purpose: To investigate whether tailoring a transformer-based language model to radiology is beneficial for radiology natural language processing (NLP) applications.

Materials and Methods: This retrospective study presents a family of bidirectional encoder representations from transformers (BERT)-based language models adapted for radiology, named RadBERT. Transformers were pretrained with either 2.16 or 4.42 million radiology reports from U.S. Department of Veterans Affairs health care systems nationwide on top of four different initializations (BERT-base, Clinical-BERT, robustly optimized BERT pretraining approach [RoBERTa], and BioMed-RoBERTa) to create six variants of RadBERT. Each variant was fine-tuned for three representative NLP tasks in radiology: (a) abnormal sentence classification: models classified sentences in radiology reports as reporting abnormal or normal findings; (b) report coding: models assigned a diagnostic code to a given radiology report for five coding systems; and (c) report summarization: given the findings section of a radiology report, models selected key sentences that summarized the findings. Model performance was compared by bootstrap resampling against five intensively studied transformer language models as baselines: BERT-base, BioBERT, Clinical-BERT, BlueBERT, and BioMed-RoBERTa.

Results: For abnormal sentence classification, all models performed well (accuracies above 97.5 and F1 scores above 95.0). RadBERT variants achieved significantly higher scores than the corresponding baselines when given only 10% or less of the 12,458 annotated training sentences. For report coding, all variants significantly outperformed the baselines for all five coding systems. The variant RadBERT-BioMed-RoBERTa performed best among all models for report summarization, achieving a Recall-Oriented Understudy for Gisting Evaluation (ROUGE)-1 score of 16.18, compared with 15.27 for the corresponding baseline (BioMed-RoBERTa; P < .004).
Conclusion: Transformer-based language models tailored to radiology had improved performance of radiology NLP tasks compared with baseline transformer language models. (C) RSNA, 2022
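The summarization results above are reported as ROUGE-1 scores, which measure unigram overlap between a model-selected summary and a reference summary. A minimal sketch of the metric, assuming simple lowercase whitespace tokenization (published evaluations typically use a standard ROUGE implementation with its own tokenization and stemming rules):

```python
from collections import Counter

def rouge1(candidate: str, reference: str) -> dict:
    """Compute ROUGE-1 precision, recall, and F1 between two texts.

    Illustrative sketch only: tokenization here is plain lowercased
    whitespace splitting, not the tokenizer used in the study.
    """
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Unigram overlap: clipped counts via multiset intersection.
    overlap = sum((cand & ref).values())
    precision = overlap / max(sum(cand.values()), 1)
    recall = overlap / max(sum(ref.values()), 1)
    f1 = 2 * precision * recall / (precision + recall) if overlap else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}
```

ROUGE values are conventionally reported multiplied by 100, so a score such as 16.18 corresponds to a raw overlap fraction of about 0.16.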
Pages: 11