Large-scale benchmarking and boosting transfer learning for medical image analysis

Cited: 0
Authors
Taher, Mohammad Reza Hosseinzadeh [1 ]
Haghighi, Fatemeh [1 ]
Gotway, Michael B. [2 ]
Liang, Jianming [1 ]
Affiliations
[1] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ 85281 USA
[2] Mayo Clin, Dept Radiol, Scottsdale, AZ 85259 USA
Funding
U.S. National Science Foundation
Keywords
Benchmarking; Transfer Learning; ConvNets and Vision Transformers; Medical Imaging; Convolutional Neural Networks
DOI
10.1016/j.media.2025.103487
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Transfer learning, particularly fine-tuning models pretrained on photographic images for medical images, has proven indispensable for medical image analysis. There are numerous models with distinct architectures pretrained on various datasets using different strategies. However, there is a lack of up-to-date, large-scale evaluations of their transferability to medical imaging, posing a challenge for practitioners in selecting the most appropriate pretrained models for their tasks at hand. To fill this gap, we conduct a comprehensive systematic study, focusing on (i) benchmarking numerous conventional and modern convolutional neural network (ConvNet) and vision transformer architectures across various medical tasks; (ii) investigating the impact of fine-tuning data size on the performance of ConvNets compared with vision transformers in medical imaging; (iii) examining the impact of pretraining data granularity on transfer learning performance; (iv) evaluating the transferability of a wide range of recent self-supervised methods with diverse training objectives to a variety of medical tasks across different modalities; and (v) delving into the efficacy of domain-adaptive pretraining on both photographic and medical datasets to develop high-performance models for medical tasks.
Our large-scale study (~5,000 experiments) yields impactful insights: (1) ConvNets demonstrate higher transferability than vision transformers when fine-tuned for medical tasks; (2) ConvNets prove more annotation-efficient than vision transformers when fine-tuned for medical tasks; (3) Fine-grained representations, rather than high-level semantic features, prove pivotal for fine-grained medical tasks; (4) Self-supervised models excel at learning holistic features compared with supervised models; and (5) Domain-adaptive pretraining leads to performant models by harnessing knowledge acquired from ImageNet and enhancing it with the readily accessible expert annotations associated with medical datasets. In the spirit of open science, all code and pretrained models are available at GitHub.com/JLiangLab/BenchmarkTransferLearning (Version 2).
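The transfer-learning setups benchmarked above all share one recipe: take representations from a pretrained backbone, then adapt a task head (or the whole network) to a medical target task. As a minimal, self-contained sketch of that idea, the snippet below trains only a linear probe on frozen features; the synthetic features here are a stand-in for a pretrained backbone's embeddings, not any of the paper's actual models or datasets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for embeddings produced by a frozen pretrained backbone:
# two classes, separable along the first feature dimension.
n, d = 200, 16
features = rng.normal(size=(n, d))
labels = (features[:, 0] > 0).astype(float)

# Linear probe: keep the backbone frozen and fit only a logistic-regression
# head on top of its features (the cheapest form of transfer learning).
w = np.zeros(d)
b = 0.0
lr = 0.5
for _ in range(200):
    logits = features @ w + b
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid
    grad = probs - labels                    # dLoss/dlogits for log-loss
    w -= lr * (features.T @ grad) / n
    b -= lr * grad.mean()

accuracy = ((features @ w + b > 0) == (labels == 1)).mean()
print(f"linear-probe accuracy: {accuracy:.2f}")
```

Full fine-tuning, by contrast, would also update the backbone's weights; the study's annotation-efficiency comparisons (insight 2) concern exactly how much labeled target data such adaptation needs.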
Pages: 24