MECPformer: multi-estimations complementary patch with CNN-transformers for weakly supervised semantic segmentation

Cited by: 3
Authors
Liu, Chunmeng [1 ]
Li, Guangyao [1 ]
Shen, Yao [1 ]
Wang, Ruiqi [1 ]
Affiliations
[1] Tongji Univ, Coll Elect & Informat Engn, Shanghai 201804, Peoples R China
Keywords
Weakly supervised learning; Semantic segmentation; Transformer; CNN; Computer vision
DOI
10.1007/s00521-023-08816-2
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Initial seeds generated by convolutional neural networks (CNNs) for weakly supervised semantic segmentation tend to highlight only the most discriminative regions and fail to capture the global extent of the target. Transformer-based methods have subsequently been proposed, benefiting from the transformer's ability to capture long-range feature representations. However, despite these advantages, we observe a flaw: for a given class, the initial seeds generated by the transformer may invade regions belonging to other classes. Motivated by these issues, we devise a simple yet effective method, dubbed MECPformer, which combines a multi-estimations complementary patch (MECP) strategy with an adaptive conflict module (ACM). Given an image, the MECP strategy manipulates it differently at different epochs, and the network mines and deeply fuses semantic information at different levels. In addition, the ACM adaptively removes conflicting pixels and exploits the network's self-training capability to mine potential target information. Without bells and whistles, MECPformer reaches a new state-of-the-art 72.0% mIoU on PASCAL VOC 2012 and 42.4% on MS COCO 2014. The code is available at https://github.com/ChunmengLiu1/MECPformer.
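The abstract describes two ideas at a high level: manipulating the input with complementary patches, and adaptively discarding pixels where class estimates conflict. The sketch below is only a minimal illustration of those two ideas, not the paper's implementation; the function names, patch size, confidence threshold, and the ignore label 255 are hypothetical choices made here for demonstration.

```python
import numpy as np

def complementary_patch_masks(h, w, patch, rng):
    """Split an image grid into two complementary binary patch masks.

    Hypothetical illustration: each (patch x patch) cell is assigned to
    exactly one of the two masks, so mask_a + mask_b covers the image
    with no overlap.
    """
    gh, gw = h // patch, w // patch
    assign = rng.integers(0, 2, size=(gh, gw))            # 0 -> mask_a, 1 -> mask_b
    cells = np.kron(assign, np.ones((patch, patch), dtype=np.int64))
    mask_a = (cells == 0).astype(np.float32)
    mask_b = 1.0 - mask_a
    return mask_a, mask_b

def remove_conflicts(cams, threshold=0.5):
    """Turn class activation maps into pseudo-labels, dropping conflicts.

    Hypothetical stand-in for an adaptive conflict rule: pixels where two
    or more classes exceed the threshold (or none does) are set to an
    ignore label so they provide no contradictory supervision.
    """
    confident = cams > threshold                           # (C, H, W) boolean
    n_confident = confident.sum(axis=0)
    labels = cams.argmax(axis=0)
    labels[n_confident >= 2] = 255                         # conflicting pixels -> ignore
    labels[n_confident == 0] = 255                         # no confident class -> ignore
    return labels

# Toy usage: build complementary masks and filter random activation maps.
rng = np.random.default_rng(0)
mask_a, mask_b = complementary_patch_masks(224, 224, patch=16, rng=rng)
assert np.all(mask_a + mask_b == 1.0)
cams = rng.random((21, 224, 224)).astype(np.float32)       # toy class activation maps
pseudo_labels = remove_conflicts(cams)
```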
Pages
23249-23264 (16 pages)