Large-scale benchmarking and boosting transfer learning for medical image analysis

Cited by: 0
Authors
Taher, Mohammad Reza Hosseinzadeh [1 ]
Haghighi, Fatemeh [1 ]
Gotway, Michael B. [2 ]
Liang, Jianming [1 ]
Affiliations
[1] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ 85281 USA
[2] Mayo Clin, Dept Radiol, Scottsdale, AZ 85259 USA
Funding
U.S. National Science Foundation;
Keywords
Benchmarking; Transfer learning; CNNs and vision transformers; Medical imaging; Convolutional neural networks;
DOI
10.1016/j.media.2025.103487
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Transfer learning, particularly fine-tuning models pretrained on photographic images for medical images, has proven indispensable for medical image analysis. There are numerous models with distinct architectures pretrained on various datasets using different strategies. However, there is a lack of up-to-date large-scale evaluations of their transferability to medical imaging, posing a challenge for practitioners in selecting the most appropriate pretrained model for the task at hand. To fill this gap, we conduct a comprehensive systematic study, focusing on (i) benchmarking numerous conventional and modern convolutional neural network (ConvNet) and vision transformer architectures across various medical tasks; (ii) investigating the impact of fine-tuning data size on the performance of ConvNets compared with vision transformers in medical imaging; (iii) examining the impact of pretraining data granularity on transfer learning performance; (iv) evaluating the transferability of a wide range of recent self-supervised methods with diverse training objectives to a variety of medical tasks across different modalities; and (v) delving into the efficacy of domain-adaptive pretraining on both photographic and medical datasets to develop high-performance models for medical tasks.
Our large-scale study (~5,000 experiments) yields impactful insights: (1) ConvNets demonstrate higher transferability than vision transformers when fine-tuning for medical tasks; (2) ConvNets prove to be more annotation-efficient than vision transformers when fine-tuning for medical tasks; (3) Fine-grained representations, rather than high-level semantic features, prove pivotal for fine-grained medical tasks; (4) Self-supervised models excel in learning holistic features compared with supervised models; and (5) Domain-adaptive pretraining leads to performant models by harnessing knowledge acquired from ImageNet and enhancing it through the use of readily accessible expert annotations associated with medical datasets. In the spirit of open science, all code and pretrained models are available at GitHub.com/JLiangLab/BenchmarkTransferLearning (Version 2).
Pages: 24