Large-scale benchmarking and boosting transfer learning for medical image analysis

Cited: 0
Authors
Taher, Mohammad Reza Hosseinzadeh [1 ]
Haghighi, Fatemeh [1 ]
Gotway, Michael B. [2 ]
Liang, Jianming [1 ]
Affiliations
[1] Arizona State Univ, Sch Comp & Augmented Intelligence, Tempe, AZ 85281 USA
[2] Mayo Clin, Dept Radiol, Scottsdale, AZ 85259 USA
Funding
National Science Foundation (USA);
Keywords
Benchmarking; Transfer Learning; CNNs and Vision transformer; Medical Imaging; CONVOLUTIONAL NEURAL-NETWORKS;
DOI
10.1016/j.media.2025.103487
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transfer learning, particularly fine-tuning models pretrained on photographic images to medical images, has proven indispensable for medical image analysis. There are numerous models with distinct architectures pretrained on various datasets using different strategies. However, there is a lack of up-to-date, large-scale evaluations of their transferability to medical imaging, posing a challenge for practitioners in selecting the most suitable pretrained models for the tasks at hand. To fill this gap, we conduct a comprehensive systematic study, focusing on (i) benchmarking numerous conventional and modern convolutional neural network (ConvNet) and vision transformer architectures across various medical tasks; (ii) investigating the impact of fine-tuning data size on the performance of ConvNets compared with vision transformers in medical imaging; (iii) examining the impact of pretraining data granularity on transfer learning performance; (iv) evaluating the transferability of a wide range of recent self-supervised methods with diverse training objectives to a variety of medical tasks across different modalities; and (v) delving into the efficacy of domain-adaptive pretraining on both photographic and medical datasets to develop high-performance models for medical tasks.
Our large-scale study (~5,000 experiments) yields impactful insights: (1) ConvNets demonstrate higher transferability than vision transformers when fine-tuning for medical tasks; (2) ConvNets prove to be more annotation efficient than vision transformers when fine-tuning for medical tasks; (3) Fine-grained representations, rather than high-level semantic features, prove pivotal for fine-grained medical tasks; (4) Self-supervised models excel in learning holistic features compared with supervised models; and (5) Domain-adaptive pretraining leads to performant models via harnessing knowledge acquired from ImageNet and enhancing it through the utilization of readily accessible expert annotations associated with medical datasets. As open science, all codes and pretrained models are available at GitHub.com/JLiangLab/BenchmarkTransferLearning (Version 2).
Pages: 24