A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis

Cited by: 40
Authors
Taher, Mohammad Reza Hosseinzadeh [1 ]
Haghighi, Fatemeh [1 ]
Feng, Ruibin [2 ]
Gotway, Michael B. [3 ]
Liang, Jianming [1 ]
Affiliations
[1] Arizona State Univ, Tempe, AZ 85281 USA
[2] Stanford Univ, Stanford, CA 94305 USA
[3] Mayo Clin, Scottsdale, AZ 85259 USA
Source
DOMAIN ADAPTATION AND REPRESENTATION TRANSFER, AND AFFORDABLE HEALTHCARE AND AI FOR RESOURCE DIVERSE GLOBAL HEALTH (DART 2021) | 2021, Vol. 12968
Funding
US National Science Foundation;
Keywords
Transfer learning; ImageNet pre-training; Self-supervised learning; CONVOLUTIONAL NEURAL-NETWORKS;
DOI
10.1007/978-3-030-87722-4_1
Chinese Library Classification
TP39 [Computer Applications];
Discipline Codes
081203; 0835;
Abstract
Transfer learning from supervised ImageNet models has been frequently used in medical image analysis. Yet, no large-scale evaluation has been conducted to benchmark the efficacy of newly-developed pre-training techniques for medical image analysis, leaving several important questions unanswered. As the first step in this direction, we conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset, and 14 top self-supervised ImageNet models on 7 diverse medical tasks in comparison with the supervised ImageNet model. Furthermore, we present a practical approach to bridge the domain gap between natural and medical images by continually (pre-)training supervised ImageNet models on medical images. Our comprehensive evaluation yields new insights: (1) models pre-trained on fine-grained data yield distinctive local representations that are more suitable for medical segmentation tasks, (2) self-supervised ImageNet models learn holistic features more effectively than supervised ImageNet models, and (3) continual pre-training can bridge the domain gap between natural and medical images. We hope that this large-scale open evaluation of transfer learning can direct the future research of deep learning for medical imaging. As open science, all code and pre-trained models are available on our GitHub page: https://github.com/JLiangLab/BenchmarkTransferLearning
Pages: 3-13 (11 pages)