Transferability of Non-contrastive Self-supervised Learning to Chronic Wound Image Recognition

Cited: 0
Authors
Akay, Julien Marteen [1 ]
Schenck, Wolfram [1 ]
Affiliations
[1] Bielefeld Univ Appl Sci & Arts, D-33619 Bielefeld, Germany
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2024, PT VIII | 2024, Vol. 15023
Keywords
Non-contrastive self-supervised learning; Convolutional neural networks; Deep learning; Transfer learning; Fine-tuning; Wound image recognition; SEGMENTATION;
DOI
10.1007/978-3-031-72353-7_31
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Chronic wounds pose significant challenges in medical practice, necessitating effective treatment approaches and a reduced burden on healthcare staff. Computer-aided diagnosis (CAD) systems offer promising solutions to enhance treatment outcomes. However, the effective processing of wound images remains a challenge. Deep learning models, particularly convolutional neural networks (CNNs), have demonstrated proficiency in this task, typically relying on extensive labeled data for optimal generalization. Given the limited availability of medical images, a common approach is to pretrain models on data-rich tasks and transfer that knowledge as a prior to the main task, compensating for the lack of labeled wound images. In this study, we investigate the transferability of CNNs pretrained with non-contrastive self-supervised learning (SSL) to enhance generalization in chronic wound image recognition. Our findings indicate that non-contrastive SSL methods combined with ConvNeXt models yield superior performance compared to multimodal models from prior work that additionally benefit from affected body part location data. Furthermore, analysis using Grad-CAM reveals that ConvNeXt models pretrained with VICRegL focus more sharply on relevant wound properties than the conventional approach of ResNet-50 models pretrained on ImageNet classification. These results underscore the crucial role of the appropriate combination of pretraining method and model architecture in effectively addressing limited wound data settings. Among the various approaches explored, ConvNeXt-XL pretrained with VICRegL emerges as a reliable and stable method. This study makes a novel contribution by demonstrating the effectiveness of recent non-contrastive SSL-based transfer learning in advancing the field of chronic wound image recognition.
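The abstract centers on VICRegL pretraining. As a rough illustration of the criterion it builds on, the sketch below implements the three global VICReg loss terms (invariance, variance, covariance) in NumPy. The coefficient defaults (25/25/1) follow the VICReg paper; the local feature-alignment term that distinguishes VICRegL, and the paper's actual training code, are not reproduced here.

```python
import numpy as np

def vicreg_loss(z1, z2, sim_w=25.0, var_w=25.0, cov_w=1.0, gamma=1.0, eps=1e-4):
    """Global VICReg criterion over two batches of embeddings (shape [n, d])."""
    n, d = z1.shape
    # Invariance: mean squared error between the two views' embeddings.
    inv = np.mean((z1 - z2) ** 2)
    # Variance: hinge keeping each dimension's std above gamma (avoids collapse).
    def var_term(z):
        std = np.sqrt(z.var(axis=0) + eps)
        return np.mean(np.maximum(0.0, gamma - std))
    var = var_term(z1) + var_term(z2)
    # Covariance: push off-diagonal covariance entries toward zero (decorrelation).
    def cov_term(z):
        zc = z - z.mean(axis=0)
        cov = (zc.T @ zc) / (n - 1)
        off = cov - np.diag(np.diag(cov))
        return np.sum(off ** 2) / d
    cov = cov_term(z1) + cov_term(z2)
    return sim_w * inv + var_w * var + cov_w * cov
```

In a transfer-learning setting such as the one studied here, a backbone (e.g. ConvNeXt) would be pretrained by minimizing this loss over augmented image pairs, then fine-tuned on the labeled wound images.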
Pages: 427-444 (18 pages)