Learn to Threshold: ThresholdNet With Confidence-Guided Manifold Mixup for Polyp Segmentation

Cited by: 59
Authors
Guo, Xiaoqing [1]
Yang, Chen [1]
Liu, Yajie [2]
Yuan, Yixuan [1]
Affiliations
[1] City Univ Hong Kong, Dept Elect Engn, Hong Kong, Peoples R China
[2] Peking Univ, Dept Radiat Oncol, Shenzhen Hosp, Shenzhen 518036, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Image segmentation; Training; Cancer; Feature extraction; Manifolds; Deep learning; Task analysis; Polyp segmentation; CGMMix data augmentation; consistency regularization; ThresholdNet; TMSG module
DOI
10.1109/TMI.2020.3046843
CLC Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
The automatic segmentation of polyps in endoscopy images is crucial for the early diagnosis and treatment of colorectal cancer. Existing deep learning-based methods for polyp segmentation, however, are inadequate due to limited annotated data and class imbalance. Moreover, these methods obtain the final segmentation by simply thresholding the likelihood maps at a single, uniform value (often set to 0.5). In this paper, we propose a novel ThresholdNet with a confidence-guided manifold mixup (CGMMix) data augmentation method to address these issues in polyp segmentation. CGMMix performs manifold mixup at both the image and feature levels and, guided by confidence, adaptively pushes the decision boundary away from the under-represented polyp class, alleviating the limited-data and class-imbalance problems. Two consistency regularizations, a mixup feature map consistency (MFMC) loss and a mixup confidence map consistency (MCMC) loss, are devised to exploit consistency constraints when training on the augmented mixup data. We then propose a two-branch network, termed ThresholdNet, that couples segmentation and threshold learning through an alternating training strategy. A threshold map supervision generator (TMSG) is embedded to provide supervision for the threshold map, inducing better optimization of the threshold branch. As a consequence, ThresholdNet is able to calibrate the segmentation result with the learned threshold map. We demonstrate the effectiveness of the proposed method on two polyp segmentation datasets, achieving state-of-the-art Dice scores of 87.307% on the EndoScene dataset and 87.879% on the WCE polyp dataset. The source code is available at https://github.com/Guo-Xiaoqing/ThresholdNet.
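To make the two core ideas in the abstract concrete, the following is a minimal NumPy sketch, not the authors' implementation: a mixup interpolation of the kind CGMMix builds on, and per-pixel thresholding of a likelihood map with a (here hand-made) threshold map instead of a single global cutoff. In the actual ThresholdNet, the mixing coefficients are confidence-guided and the threshold map is produced by a learned branch supervised by the TMSG module; the function names and toy arrays below are illustrative assumptions.

```python
import numpy as np

def mixup(x1, x2, lam):
    """Convex combination of two samples (images or feature maps),
    the basic operation underlying manifold mixup augmentation."""
    return lam * x1 + (1.0 - lam) * x2

def threshold_segment(prob_map, threshold_map):
    """Binarize a segmentation likelihood map with a per-pixel
    threshold map rather than a single uniform value such as 0.5."""
    return (prob_map > threshold_map).astype(np.uint8)

# Toy demonstration with a 4x4 likelihood map.
rng = np.random.default_rng(0)
prob = rng.random((4, 4))

# Conventional global threshold at 0.5.
mask_fixed = threshold_segment(prob, np.full_like(prob, 0.5))

# A spatially varying threshold map (here random; in ThresholdNet it is learned).
thr_map = np.clip(0.5 + 0.2 * (rng.random((4, 4)) - 0.5), 0.0, 1.0)
mask_learned = threshold_segment(prob, thr_map)

# Mixup of two toy "images" with coefficient lambda = 0.3.
mixed = mixup(np.zeros((4, 4)), np.ones((4, 4)), 0.3)
```

The point of the per-pixel threshold map is that hard-to-segment regions (e.g. polyp boundaries) can receive a lower or higher cutoff than confident background regions, which a single global 0.5 cannot provide.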
Pages: 1134-1146 (13 pages)