A knowledge-based learning framework for self-supervised pre-training towards enhanced recognition of biomedical microscopy images

Cited by: 9
Authors
Chen, Wei [1 ]
Li, Chen [1 ]
Chen, Dan [2 ]
Luo, Xin [1 ]
Affiliations
[1] Natl Univ Def Technol, Sch Comp, Changsha 410073, Peoples R China
[2] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Self-supervised neural network; Biomedical microscopy images; Classification; Segmentation; Generative learning; Contrastive learning; Pre-training; UNCERTAINTY QUANTIFICATION;
DOI
10.1016/j.neunet.2023.09.001
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Self-supervised pre-training has become the preferred choice for establishing reliable neural networks for automated recognition of massive biomedical microscopy images, which routinely come without annotations, without semantics, and without any guarantee of quality. Note that this paradigm is still in its infancy and limited by closely related open issues: (1) how to learn robust representations in an unsupervised manner from unlabeled biomedical microscopy images of low sample diversity? and (2) how to obtain the most significant representations demanded by high-quality segmentation? To address these issues, this study proposes a knowledge-based learning framework (TOWER) towards enhanced recognition of biomedical microscopy images, which works in three phases by synergizing contrastive learning and generative learning methods: (1) Sample Space Diversification: reconstructive proxy tasks embed a priori knowledge, with context highlighted, to diversify the expanded sample space; (2) Enhanced Representation Learning: an informative noise-contrastive estimation loss regularizes the encoder to enhance representation learning from annotation-free images; (3) Correlated Optimization: the optimization of the pre-trained encoder and decoder is correlated via image restoration from proxy tasks, targeting the needs of semantic segmentation. Experiments have been conducted on public datasets of biomedical microscopy images against state-of-the-art counterparts (e.g., SimCLR and BYOL), and the results demonstrate that TOWER statistically outperforms all self-supervised methods, achieving a Dice improvement of 1.38 percentage points over SimCLR. TOWER also shows potential in multi-modality medical image analysis and enables label-efficient semi-supervised learning, e.g., reducing the annotation cost by up to 99% in pathological classification. (c) 2023 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).
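The abstract describes a joint generative-contrastive pre-training objective: a decoder restores corrupted images (phases 1 and 3) while a noise-contrastive estimation loss regularizes the encoder (phase 2), with the two branches optimized together. The sketch below is a minimal, hypothetical PyTorch rendering of that general idea, not the authors' TOWER implementation; every name (JointPretrainer, lambda_rec, tau, the toy encoder/decoder) is an illustrative assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointPretrainer(nn.Module):
    """Jointly pre-trains an encoder (contrastive branch) and a
    decoder (generative/restoration branch) from two corrupted views.
    Hypothetical sketch; not the TOWER reference implementation."""
    def __init__(self, encoder, decoder, feat_dim=512, proj_dim=128):
        super().__init__()
        self.encoder = encoder  # shared backbone producing (B, feat_dim)
        self.decoder = decoder  # maps features back to image space
        self.proj = nn.Sequential(  # projection head for the NCE loss
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, proj_dim))

    def forward(self, view1, view2, target, tau=0.2, lambda_rec=1.0):
        z1, z2 = self.encoder(view1), self.encoder(view2)
        # Generative branch: restore the clean image from view1, so the
        # decoder's pixel-level loss also updates the encoder
        # ("correlated optimization" in the abstract's terms).
        rec_loss = F.mse_loss(self.decoder(z1), target)
        # Contrastive branch: InfoNCE over the batch; the matching index
        # in the other view is the positive, all others are negatives.
        h1 = F.normalize(self.proj(z1), dim=1)
        h2 = F.normalize(self.proj(z2), dim=1)
        logits = h1 @ h2.t() / tau                    # (B, B) similarities
        labels = torch.arange(h1.size(0), device=h1.device)
        nce_loss = F.cross_entropy(logits, labels)
        return lambda_rec * rec_loss + nce_loss

# Toy usage on 32x32 single-channel patches (shapes only, not a real model).
enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 512), nn.ReLU())
dec = nn.Sequential(nn.Linear(512, 32 * 32), nn.Unflatten(1, (1, 32, 32)))
model = JointPretrainer(enc, dec)
x = torch.rand(8, 1, 32, 32)                 # clean patches
loss = model(x + 0.1 * torch.randn_like(x),  # corrupted view 1
             x + 0.1 * torch.randn_like(x),  # corrupted view 2
             x)                              # restoration target
loss.backward()
```

Summing the two losses couples the encoder and decoder updates, which mirrors the abstract's "correlated optimization"; the weight lambda_rec balancing restoration against contrastive regularization would in practice be tuned per dataset.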
Pages: 810-826
Page count: 17
Related papers
50 records in total
  • [31] Multi-modal cross-domain self-supervised pre-training for fMRI and EEG fusion
    Wei, Xinxu
    Zhao, Kanhao
    Jiao, Yong
    Carlisle, Nancy B.
    Xie, Hua
    Fonzo, Gregory A.
    Zhang, Yu
    NEURAL NETWORKS, 2025, 184
  • [32] Improving Seismic Fault Recognition with Self-Supervised Pre-Training: A Study of 3D Transformer-Based with Multi-Scale Decoding and Fusion
    Zhang, Zeren
    Chen, Ran
    Ma, Jinwen
    REMOTE SENSING, 2024, 16 (05)
  • [33] Self-supervised deep-learning segmentation of corneal endothelium specular microscopy images
    Sanchez, Sergio
    Mendoza, Kevin
    Quintero, Fernando J.
    Prada, Angelica M.
    Tello, Alejandro
    Galvis, Virgilio
    Romero, Lenny A.
    Marrugo, Andres G.
    2023 IEEE COLOMBIAN CONFERENCE ON APPLICATIONS OF COMPUTATIONAL INTELLIGENCE, COLCACI, 2023
  • [34] Detection of Changes in Buildings in Remote Sensing Images via Self-Supervised Contrastive Pre-Training and Historical Geographic Information System Vector Maps
    Feng, Wenqing
    Guan, Fangli
    Tu, Jihui
    Sun, Chenhao
    Xu, Wei
    REMOTE SENSING, 2023, 15 (24)
  • [35] DDDG: A dual bi-directional knowledge distillation method with generative self-supervised pre-training and its hardware implementation on SoC for ECG
    Zhang, Huaicheng
    Liu, Wenhan
    Guo, Qianxi
    Shi, Jiguang
    Chang, Sheng
    Wang, Hao
    He, Jin
    Huang, Qijun
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 244
  • [36] Less is More: Selective reduction of CT data for self-supervised pre-training of deep learning models with contrastive learning improves downstream classification performance
    Wolf, Daniel
    Payer, Tristan
    Lisson, Catharina Silvia
    Lisson, Christoph Gerhard
    Beer, Meinrad
    Götz, Michael
    Ropinski, Timo
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 183
  • [37] CETP: A novel semi-supervised framework based on contrastive pre-training for imbalanced encrypted traffic classification
    Lin, Xinjie
    He, Longtao
    Gou, Gaopeng
    Yu, Jing
    Guan, Zhong
    Li, Xiang
    Guo, Juncheng
    Xiong, Gang
    COMPUTERS & SECURITY, 2024, 143
  • [38] A self-supervised pre-training scheme for multi-source heterogeneous remote sensing image land cover classification
    Xue, Z.
    Yu, X.
    Liu, J.
    Yang, G.
    Liu, B.
    Yu, A.
    Zhou, J.
    Jin, S.
    Cehui Xuebao/Acta Geodaetica et Cartographica Sinica, 2024, 53 (03): 512-525
  • [39] Classification of Ground-Based Cloud Images by Contrastive Self-Supervised Learning
    Lv, Qi
    Li, Qian
    Chen, Kai
    Lu, Yao
    Wang, Liwen
    REMOTE SENSING, 2022, 14 (22)
  • [40] Feature-Differencing-Based Self-Supervised Pre-Training for Land-Use/Land-Cover Change Detection in High-Resolution Remote Sensing Images
    Feng, Wenqing
    Guan, Fangli
    Sun, Chenhao
    Xu, Wei
    LAND, 2024, 13 (07)