Transferability of Non-contrastive Self-supervised Learning to Chronic Wound Image Recognition

Times Cited: 0
Authors
Akay, Julien Marteen [1 ]
Schenck, Wolfram [1 ]
Affiliations
[1] Bielefeld Univ Appl Sci & Arts, D-33619 Bielefeld, Germany
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT VIII | 2024, Vol. 15023
Keywords
Non-contrastive self-supervised learning; Convolutional neural networks; Deep learning; Transfer learning; Fine-tuning; Wound image recognition; SEGMENTATION;
DOI
10.1007/978-3-031-72353-7_31
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Chronic wounds pose significant challenges in medical practice, necessitating effective treatment approaches and a reduced burden on healthcare staff. Computer-aided diagnosis (CAD) systems offer promising solutions to enhance treatment outcomes. However, the effective processing of wound images remains a challenge. Deep learning models, particularly convolutional neural networks (CNNs), have demonstrated proficiency in this task, typically relying on extensive labeled data for optimal generalization. Given the limited availability of medical images, a common approach is to pretrain models on data-rich tasks and transfer that knowledge as a prior to the main task, compensating for the lack of labeled wound images. In this study, we investigate the transferability of CNNs pretrained with non-contrastive self-supervised learning (SSL) to enhance generalization in chronic wound image recognition. Our findings indicate that combining non-contrastive SSL methods with ConvNeXt models yields superior performance compared to multimodal models from related work that additionally benefit from data on the location of the affected body part. Furthermore, Grad-CAM analysis reveals that ConvNeXt models pretrained with VICRegL focus more consistently on relevant wound properties than the conventional approach of ResNet-50 models pretrained on ImageNet classification. These results underscore the crucial role of choosing the appropriate combination of pretraining method and model architecture in limited wound data settings. Among the approaches explored, ConvNeXt-XL pretrained with VICRegL emerges as a reliable and stable method. This study makes a novel contribution by demonstrating the effectiveness of recent non-contrastive SSL-based transfer learning in advancing chronic wound image recognition.
Pages: 427 - 444
Number of pages: 18
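
As a rough illustration of the transfer-learning setup described in the abstract, the sketch below fine-tunes a ConvNeXt backbone on a folder of wound images. It is not the authors' code: torchvision's ImageNet weights stand in for an SSL-pretrained (VICRegL) checkpoint, and the dataset path, class count, and hyperparameters are hypothetical assumptions.

```python
# Illustrative fine-tuning sketch (assumptions, not the authors' code):
# torchvision's ImageNet ConvNeXt weights stand in for a VICRegL-pretrained backbone,
# and the dataset path, class count, and hyperparameters are hypothetical.
import torch
from torch import nn
from torchvision import datasets, models, transforms

NUM_WOUND_CLASSES = 4  # hypothetical number of wound categories

# Load a ConvNeXt backbone and replace the classification head for the wound classes.
model = models.convnext_large(weights=models.ConvNeXt_Large_Weights.DEFAULT)
model.classifier[2] = nn.Linear(model.classifier[2].in_features, NUM_WOUND_CLASSES)

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical ImageFolder layout: wound_images/<class_name>/<image>.jpg
dataset = datasets.ImageFolder("wound_images", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# End-to-end fine-tuning: all backbone weights are updated on the wound data.
model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Swapping the stand-in weights for a non-contrastive SSL checkpoint (e.g., one released with VICRegL) would match the paper's setting more closely; the fine-tuning loop itself would remain unchanged.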