A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis

Cited by: 42
Authors
Taher, Mohammad Reza Hosseinzadeh [1 ]
Haghighi, Fatemeh [1 ]
Feng, Ruibin [2 ]
Gotway, Michael B. [3 ]
Liang, Jianming [1 ]
Affiliations
[1] Arizona State Univ, Tempe, AZ 85281 USA
[2] Stanford Univ, Stanford, CA 94305 USA
[3] Mayo Clin, Scottsdale, AZ 85259 USA
Source
DOMAIN ADAPTATION AND REPRESENTATION TRANSFER, AND AFFORDABLE HEALTHCARE AND AI FOR RESOURCE DIVERSE GLOBAL HEALTH (DART 2021) | 2021, Vol. 12968
Funding
U.S. National Science Foundation
Keywords
Transfer learning; ImageNet pre-training; Self-supervised learning; CONVOLUTIONAL NEURAL-NETWORKS;
DOI
10.1007/978-3-030-87722-4_1
Chinese Library Classification
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
Transfer learning from supervised ImageNet models has been frequently used in medical image analysis. Yet, no large-scale evaluation has been conducted to benchmark the efficacy of newly developed pre-training techniques for medical image analysis, leaving several important questions unanswered. As the first step in this direction, we conduct a systematic study of the transferability of (1) models pre-trained on iNat2021, the most recent large-scale fine-grained dataset, and (2) 14 top self-supervised ImageNet models, across 7 diverse medical tasks, in comparison with the supervised ImageNet model. Furthermore, we present a practical approach to bridge the domain gap between natural and medical images by continually (pre-)training supervised ImageNet models on medical images. Our comprehensive evaluation yields new insights: (1) models pre-trained on fine-grained data yield distinctive local representations that are more suitable for medical segmentation tasks, (2) self-supervised ImageNet models learn holistic features more effectively than supervised ImageNet models, and (3) continual pre-training can bridge the domain gap between natural and medical images. We hope that this large-scale open evaluation of transfer learning can guide future research on deep learning for medical imaging. In the spirit of open science, all code and pre-trained models are available on our GitHub page: https://github.com/JLiangLab/BenchmarkTransferLearning.
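The continual pre-training recipe described in the abstract (warm-start from weights learned on a source domain, then keep training on target-domain data) can be illustrated in miniature with a toy logistic-regression experiment. Everything below (the synthetic data, hyperparameters, and function names) is an illustrative assumption, not the paper's actual models, datasets, or code:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=200, d=20, w_true=None, shift=0.0):
    # Synthetic binary-classification task; `shift` moves the input
    # distribution to mimic a (mild) domain gap between source and target.
    if w_true is None:
        w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d)) + shift
    y = (X @ w_true > 0).astype(float)
    return X, y, w_true

def train(X, y, w=None, lr=0.1, steps=200):
    # Logistic regression via gradient descent; passing `w` warm-starts
    # from existing weights (the "continual (pre-)training" step).
    if w is None:
        w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

# "Pre-train" on a large source task (a stand-in for ImageNet).
Xs, ys, w_true = make_task()
w_pre = train(Xs, ys)

# Small related target task (a stand-in for a medical dataset).
Xt, yt, _ = make_task(n=30, w_true=w_true, shift=0.3)

# Train from scratch vs. continue training from the pre-trained weights,
# using the same tiny budget on the target task.
w_scratch = train(Xt, yt, steps=20)
w_transfer = train(Xt, yt, w=w_pre.copy(), steps=20)

# Evaluate both on held-out target-domain data.
Xh, yh, _ = make_task(n=500, w_true=w_true, shift=0.3)
acc_scratch = accuracy(w_scratch, Xh, yh)
acc_transfer = accuracy(w_transfer, Xh, yh)
```

The design point being sketched: with few target labels and a small optimization budget, the warm-started model starts from representations already aligned with the task family, which is the intuition behind continually pre-training ImageNet models on medical images before fine-tuning.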
Pages: 3-13 (11 pages)