Information extraction from weakly structured radiological reports with natural language queries

Cited by: 7
Authors
Dada, Amin [1]
Ufer, Tim Leon [1]
Kim, Moon [1]
Hasin, Max [1]
Spieker, Nicola [2]
Forsting, Michael [1,3]
Nensa, Felix [1,3]
Egger, Jan [1,4]
Kleesiek, Jens [1,2,5]
Affiliations
[1] Univ Hosp Essen, Inst AI Med IKIM, Girardetstr 2, D-45131 Essen, Germany
[2] Dr Kruger MVZ GmbH, Bocholt, Germany
[3] Univ Hosp Essen, Inst Diagnost & Intervent Radiol & Neuroradiol, Essen, Germany
[4] Univ Med Essen, Canc Res Ctr Cologne Essen CCCE, Essen, Germany
[5] German Canc Consortium DKTK, Partner Site Essen, Essen, Germany
Keywords
Information extraction; Natural language processing; Machine learning
DOI
10.1007/s00330-023-09977-3
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline codes
1002; 100207; 1009
Abstract
Objectives: Provide physicians and researchers with an efficient way to extract information from weakly structured radiology reports using natural language processing (NLP) machine learning models.
Methods: We evaluate seven different German bidirectional encoder representations from transformers (BERT) models on a dataset of 857,783 unlabeled radiology reports and an annotated reading comprehension dataset in the format of SQuAD 2.0 based on 1223 additional reports.
Results: Continued pre-training of a BERT model on the radiology dataset and a medical online encyclopedia resulted in the most accurate model, with an F1-score of 83.97% and an exact match score of 71.63% for answerable questions and 96.01% accuracy in detecting unanswerable questions. Fine-tuning a non-medical model without further pre-training led to the lowest-performing model. The final model proved stable against variation in the formulation of questions and in dealing with questions on topics excluded from the training set.
Conclusions: General-domain BERT models further pre-trained on radiological data achieve high accuracy in answering questions on radiology reports. We propose to integrate our approach into the workflow of medical practitioners and researchers to extract information from radiology reports.
Clinical relevance statement: By reducing the need for manual searches of radiology reports, radiologists' resources are freed up, which indirectly benefits patients.
Key Points:
• BERT models pre-trained on general-domain datasets and radiology reports achieve high accuracy (83.97% F1-score) on question answering for radiology reports.
• The best-performing model achieves an F1-score of 83.97% for answerable questions and 96.01% accuracy for questions without an answer.
• Additional radiology-specific pre-training improves the performance of all investigated BERT models.
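To illustrate the general setup described in the abstract, the sketch below shows how a SQuAD 2.0-style extractive question-answering model can be queried over a report with the Hugging Face transformers pipeline. The checkpoint name, report text, and question are placeholders chosen for illustration; they are assumptions and not the fine-tuned model or data from the paper.

```python
# Minimal sketch: extractive QA over a (German) radiology report in SQuAD 2.0 style,
# where the model may return an empty answer for unanswerable questions.
from transformers import pipeline

# Assumed public German extractive-QA checkpoint; swap in any suitable model.
qa = pipeline("question-answering", model="deepset/gelectra-base-germanquad")

# Toy report text; not taken from the paper's dataset.
report = (
    "CT Thorax: Kein Nachweis pulmonaler Rundherde. "
    "Unauffaellige mediastinale Lymphknoten."
)

result = qa(
    question="Gibt es pulmonale Rundherde?",
    context=report,
    handle_impossible_answer=True,  # SQuAD 2.0 behaviour: allow declining to answer
)
print(result["answer"], result["score"])
```

In this setting, the F1-score and exact match metrics reported in the abstract compare the predicted answer span against annotated gold spans, while unanswerable questions are counted as correct when the model returns an empty answer.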
Pages: 330-337
Number of pages: 8