Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation

Cited by: 81
Authors
Chaitanya, Krishna [1 ]
Erdil, Ertunc [1 ]
Karani, Neerav [1 ]
Konukoglu, Ender [1 ]
Affiliations
[1] Swiss Fed Inst Technol, Comp Vis Lab, Sternwartstr 7, CH-8092 Zurich, Switzerland
Keywords
Contrastive learning; Self-supervised learning; Semi-supervised learning; Machine learning; Deep learning; Medical image segmentation;
DOI
10.1016/j.media.2023.102792
CLC number
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Supervised deep learning-based methods yield accurate results for medical image segmentation. However, they require large labeled datasets, and obtaining them is a laborious task that requires clinical expertise. Semi- and self-supervised learning-based approaches address this limitation by exploiting unlabeled data together with limited annotated data. Recent self-supervised learning methods use contrastive loss to learn good global-level representations from unlabeled images and achieve high performance in classification tasks on popular natural image datasets like ImageNet. In pixel-level prediction tasks such as segmentation, it is crucial to also learn good local-level representations along with global representations to achieve better accuracy. However, the impact of existing local contrastive loss-based methods remains limited for learning good local representations, because similar and dissimilar local regions are defined based on random augmentations and spatial proximity rather than on the semantic labels of local regions, owing to the lack of large-scale expert annotations in the semi-/self-supervised setting. In this paper, we propose a local contrastive loss to learn good pixel-level features useful for segmentation by exploiting semantic label information obtained from pseudo-labels of unlabeled images alongside limited annotated images with ground-truth (GT) labels. In particular, the proposed contrastive loss encourages similar representations for pixels that have the same pseudo-label/GT label, while pushing them away from the representations of pixels with different pseudo-labels/GT labels in the dataset. We perform pseudo-label-based self-training and train the network by jointly optimizing the proposed contrastive loss on both the labeled and unlabeled sets and the segmentation loss on only the limited labeled set.
We evaluate the proposed approach on three public medical datasets of cardiac and prostate anatomies, and obtain high segmentation performance with a limited labeled set of one or two 3D volumes. Extensive comparisons with state-of-the-art semi-supervised and data augmentation methods, as well as concurrent contrastive learning methods, demonstrate the substantial improvement achieved by the proposed method. The code is publicly available at https://github.com/krishnabits001/pseudo_label_contrastive_training.
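The label-guided contrastive objective described in the abstract can be illustrated as a supervised contrastive loss over sampled per-pixel embeddings: pixels sharing a pseudo-label or GT label are treated as positives, all other pixels as negatives. The NumPy sketch below is a minimal illustration of that idea, not the authors' implementation; the function name, the sampling of pixels, and the temperature value are assumptions for the example.

```python
import numpy as np

def pixel_supcon_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive loss over sampled per-pixel embeddings.

    embeddings: (N, D) array of L2-normalized pixel feature vectors.
    labels:     (N,) pseudo-label or ground-truth class per pixel.
    For each anchor pixel, pixels carrying the same label are positives;
    all remaining pixels act as negatives.
    """
    sim = embeddings @ embeddings.T / temperature       # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    logits = sim - sim.max(axis=1, keepdims=True)       # shift for numerical stability
    log_denom = np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_prob = logits - log_denom                       # log-softmax over all other pixels
    pos_mask = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(pos_mask, 0.0)                     # an anchor is not its own positive
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                              # anchors with at least one positive
    # mean negative log-probability over the positives of each valid anchor
    per_anchor = -np.where(pos_mask > 0, log_prob, 0.0).sum(axis=1)
    return (per_anchor[valid] / pos_counts[valid]).mean()

# Toy example: 4 sampled pixels from two classes with well-separated features,
# so the loss is already close to zero.
emb = np.array([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
lab = np.array([0, 0, 1, 1])
loss = pixel_supcon_loss(emb, lab)
```

In the paper's setting this loss would be computed on labeled pixels using GT labels and on unlabeled pixels using pseudo-labels, and added to the segmentation loss on the labeled set during joint training.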
Pages: 12