A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis

Cited by: 40
Authors
Taher, Mohammad Reza Hosseinzadeh [1 ]
Haghighi, Fatemeh [1 ]
Feng, Ruibin [2 ]
Gotway, Michael B. [3 ]
Liang, Jianming [1 ]
Affiliations
[1] Arizona State Univ, Tempe, AZ 85281 USA
[2] Stanford Univ, Stanford, CA 94305 USA
[3] Mayo Clin, Scottsdale, AZ 85259 USA
Source
DOMAIN ADAPTATION AND REPRESENTATION TRANSFER, AND AFFORDABLE HEALTHCARE AND AI FOR RESOURCE DIVERSE GLOBAL HEALTH (DART 2021) | 2021 / Vol. 12968
Funding
U.S. National Science Foundation
Keywords
Transfer learning; ImageNet pre-training; Self-supervised learning; Convolutional neural networks
DOI
10.1007/978-3-030-87722-4_1
Chinese Library Classification
TP39 [Computer applications]
Discipline Codes
081203; 0835
Abstract
Transfer learning from supervised ImageNet models has been frequently used in medical image analysis. Yet, no large-scale evaluation has been conducted to benchmark the efficacy of newly-developed pre-training techniques for medical image analysis, leaving several important questions unanswered. As the first step in this direction, we conduct a systematic study on the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset, and 14 top self-supervised ImageNet models on 7 diverse medical tasks in comparison with the supervised ImageNet model. Furthermore, we present a practical approach to bridge the domain gap between natural and medical images by continually (pre-)training supervised ImageNet models on medical images. Our comprehensive evaluation yields new insights: (1) models pre-trained on fine-grained data yield distinctive local representations that are more suitable for medical segmentation tasks, (2) self-supervised ImageNet models learn holistic features more effectively than supervised ImageNet models, and (3) continual pre-training can bridge the domain gap between natural and medical images. We hope that this large-scale open evaluation of transfer learning can direct the future research of deep learning for medical imaging. As open science, all code and pre-trained models are available on our GitHub page https://github.com/JLiangLab/BenchmarkTransferLearning.
Pages: 3-13 (11 pages)