Self-Supervised Representation Distribution Learning for Reliable Data Augmentation in Histopathology WSI Classification

Cited by: 0
Authors
Tang, Kunming [1 ,2 ,3 ]
Jiang, Zhiguo [1 ,2 ,3 ]
Wu, Kun [1 ,2 ,3 ]
Shi, Jun [4 ]
Xie, Fengying [1 ,2 ,3 ]
Wang, Wei [5 ,6 ]
Wu, Haibo [5 ,6 ]
Zheng, Yushan [2 ]
Affiliations
[1] Beihang Univ, Image Proc Ctr, Sch Astronaut, Sch Engn Med, Beijing 100191, Peoples R China
[2] Beihang Univ, Beijing Adv Innovat Ctr Biomed Engn, Sch Engn Med, Beijing 100191, Peoples R China
[3] Tianmushan Lab, Hangzhou 311115, Peoples R China
[4] Hefei Univ Technol, Sch Software, Hefei 230009, Peoples R China
[5] Univ Sci & Technol China USTC, Affiliated Hosp USTC 1, Dept Pathol, Hefei 230036, Peoples R China
[6] Univ Sci & Technol China USTC, Intelligent Pathol Inst, Div Life Sci & Med, Hefei 230036, Peoples R China
Funding
National Natural Science Foundation of China; Beijing Natural Science Foundation
Keywords
Data augmentation; Training; Representation learning; Data models; Histopathology; Feature extraction; Supervised learning; Self-supervised representation learning; WSI classification; Artificial intelligence
DOI
10.1109/TMI.2024.3447672
CLC classification
TP39 [Computer Applications]
Discipline codes
081203; 0835
Abstract
Multiple instance learning (MIL) based whole slide image (WSI) classification is typically performed on representations of patches extracted from the WSI with a pre-trained patch encoder. Classification performance therefore depends on both patch-level representation learning and MIL classifier training. Most MIL methods extract patch representations with a frozen model pre-trained on ImageNet or trained with self-supervised learning on a histopathology image dataset, and then keep these representations fixed while training the MIL classifier for efficiency. However, such invariant representations cannot provide the diversity required to train a robust MIL classifier, which significantly limits WSI classification performance. In this paper, we propose a Self-Supervised Representation Distribution Learning framework (SSRDL) for patch-level representation learning, together with an online representation sampling strategy (ORS) for both patch feature extraction and WSI-level data augmentation. The proposed method was evaluated on three datasets under three MIL frameworks. The experimental results demonstrate that it achieves the best performance in histopathology image representation learning and data augmentation and outperforms state-of-the-art methods under different WSI classification frameworks. The code is available at https://github.com/lazytkm/SSRDL.
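To make the core idea concrete, below is a minimal, hypothetical sketch of distribution-based feature augmentation for MIL training. It assumes a Gaussian parameterization of each patch representation and an attention-style MIL head; the module names, layer sizes, and pooling scheme are illustrative assumptions, not the authors' exact SSRDL/ORS architecture (see the paper and repository for the actual method).

```python
# Sketch: each patch is encoded as a feature *distribution* rather than a
# fixed vector, and the features fed to the MIL classifier are re-sampled
# from that distribution at every training step (online sampling), so the
# classifier sees a different augmented bag each epoch. All details below
# are illustrative assumptions.
import torch
import torch.nn as nn


class DistributionHead(nn.Module):
    """Maps a backbone embedding to a Gaussian over patch representations."""

    def __init__(self, in_dim: int = 512, rep_dim: int = 256):
        super().__init__()
        self.mu = nn.Linear(in_dim, rep_dim)
        self.logvar = nn.Linear(in_dim, rep_dim)

    def forward(self, h: torch.Tensor):
        return self.mu(h), self.logvar(h)

    def sample(self, h: torch.Tensor) -> torch.Tensor:
        # Reparameterized draw: a fresh feature per patch on every call,
        # which is what provides WSI-level augmentation for the MIL model.
        mu, logvar = self.forward(h)
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)


class AttentionMIL(nn.Module):
    """Attention-pooled MIL classifier over a bag of patch features."""

    def __init__(self, rep_dim: int = 256, n_classes: int = 2):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(rep_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.cls = nn.Linear(rep_dim, n_classes)

    def forward(self, bag: torch.Tensor) -> torch.Tensor:
        # bag: (num_patches, rep_dim) -> slide-level logits (1, n_classes)
        weights = torch.softmax(self.attn(bag), dim=0)
        slide_feat = (weights * bag).sum(dim=0, keepdim=True)
        return self.cls(slide_feat)


if __name__ == "__main__":
    # Toy usage: pretend backbone embeddings for one WSI with 1000 patches.
    backbone_feats = torch.randn(1000, 512)
    dist_head = DistributionHead()
    mil = AttentionMIL()

    # Unlike frozen features, the bag is re-sampled each epoch, so the MIL
    # classifier never trains on exactly the same representations twice.
    for epoch in range(3):
        bag = dist_head.sample(backbone_feats)
        logits = mil(bag)
        print(epoch, logits.shape)
```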
Pages: 462-474 (13 pages)
Related Papers (50 total)
  • [1] Self-supervised learning with automatic data augmentation for enhancing representation
    Park, Chanjong
    Kim, Eunwoo
    PATTERN RECOGNITION LETTERS, 2024, 184 : 133 - 139
  • [2] ViewMix: Augmentation for Robust Representation in Self-Supervised Learning
    Das, Arjon
    Zhong, Xin
    IEEE ACCESS, 2024, 12 : 8461 - 8470
  • [3] Joint data and feature augmentation for self-supervised representation learning on point clouds
    Lu, Zhuheng
    Dai, Yuewei
    Li, Weiqing
    Su, Zhiyong
    GRAPHICAL MODELS, 2023, 129
  • [4] Self-Supervised Action Representation Learning Based on Asymmetric Skeleton Data Augmentation
    Zhou, Hualing
    Li, Xi
    Xu, Dahong
    Liu, Hong
    Guo, Jianping
    Zhang, Yihan
    SENSORS, 2022, 22 (22)
  • [5] Self-Supervised Graph Representation Learning Method Based on Data and Feature Augmentation
    Xu, Yunfeng
    Fan, Hexun
    Computer Engineering and Applications, 2024, 60 (17) : 148 - 157
  • [6] HistoSSL: Self-Supervised Representation Learning for Classifying Histopathology Images
    Jin, Xu
    Huang, Teng
    Wen, Ke
    Chi, Mengxian
    An, Hong
    MATHEMATICS, 2023, 11 (01)
  • [7] Augmentation Adversarial Training for Self-Supervised Speaker Representation Learning
    Kang, Jingu
    Huh, Jaesung
    Heo, Hee Soo
    Chung, Joon Son
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2022, 16 (06) : 1253 - 1262
  • [8] Self-Supervised Representation Learning for Document Image Classification
    Siddiqui, Shoaib Ahmed
    Dengel, Andreas
    Ahmed, Sheraz
    IEEE ACCESS, 2021, 9 : 164358 - 164367
  • [9] Automatic Data Augmentation Selection and Parametrization in Contrastive Self-Supervised Speech Representation Learning
    Zaiem, Salah
    Parcollet, Titouan
    Essid, Slim
    INTERSPEECH 2022, 2022, : 669 - 673
  • [10] COSDA: Covariance regularized semantic data augmentation for self-supervised visual representation learning
    Chen, Hui
    Ma, Yongqiang
    Jiang, Jingjing
    Zheng, Nanning
    KNOWLEDGE-BASED SYSTEMS, 2025, 311