Large-scale benchmarking and boosting transfer learning for medical image analysis

Cited: 0
Authors
Taher, Mohammad Reza Hosseinzadeh [1 ]
Haghighi, Fatemeh [1 ]
Gotway, Michael B. [2 ]
Liang, Jianming [1 ]
Affiliations
[1] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ 85281 USA
[2] Mayo Clin, Dept Radiol, Scottsdale, AZ 85259 USA
Funding
US National Science Foundation;
Keywords
Benchmarking; Transfer learning; CNNs and vision transformers; Medical imaging; Convolutional neural networks;
DOI
10.1016/j.media.2025.103487
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Transfer learning, particularly fine-tuning models pretrained on photographic images for medical images, has proven indispensable for medical image analysis. Numerous models with distinct architectures have been pretrained on various datasets using different strategies, yet up-to-date large-scale evaluations of their transferability to medical imaging are lacking, making it difficult for practitioners to select the most appropriate pretrained models for their tasks at hand. To fill this gap, we conduct a comprehensive systematic study, focusing on (i) benchmarking numerous conventional and modern convolutional neural network (ConvNet) and vision transformer architectures across various medical tasks; (ii) investigating the impact of fine-tuning data size on the performance of ConvNets compared with vision transformers in medical imaging; (iii) examining the impact of pretraining data granularity on transfer learning performance; (iv) evaluating the transferability of a wide range of recent self-supervised methods with diverse training objectives to a variety of medical tasks across different modalities; and (v) delving into the efficacy of domain-adaptive pretraining on both photographic and medical datasets to develop high-performance models for medical tasks.
Our large-scale study (~5,000 experiments) yields impactful insights: (1) ConvNets demonstrate higher transferability than vision transformers when fine-tuning for medical tasks; (2) ConvNets prove to be more annotation-efficient than vision transformers when fine-tuning for medical tasks; (3) Fine-grained representations, rather than high-level semantic features, prove pivotal for fine-grained medical tasks; (4) Self-supervised models excel in learning holistic features compared with supervised models; and (5) Domain-adaptive pretraining leads to performant models by harnessing knowledge acquired from ImageNet and enhancing it through readily accessible expert annotations associated with medical datasets. As open science, all codes and pretrained models are available at GitHub.com/JLiangLab/BenchmarkTransferLearning (Version 2).
Pages: 24