Segmentation from localization: a weakly supervised semantic segmentation method for resegmenting CAM

Times Cited: 0
Authors
Jiang, Jingjing [1 ]
Wang, Hongxia [1 ]
Wu, Jiali [1 ]
Liu, Chun [1 ]
Affiliations
[1] Wuhan Univ Technol, Sch Comp Sci & Artificial Intelligence, Wuhan 430070, Hubei, Peoples R China
Keywords
Image segmentation; Weakly supervised semantic segmentation; Class activation map; Class-agnostic segmentation;
DOI
10.1007/s11042-023-17779-4
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Semantic segmentation has wide applications in computer vision. Because pixel-level annotation is labor-intensive, weakly supervised semantic segmentation (WSSS) methods based on image-level labels have become an important research topic. However, existing image-level WSSS methods suffer from sparse segmentation results and inaccurate object boundaries. To overcome these problems, we propose a novel locate-then-segment framework that separates the localization and segmentation processes of WSSS. During localization, we use a class activation map (CAM) to locate the rough position of the object, as most WSSS methods do. During segmentation, we focus on designing an object segmenter that refines the CAM to obtain the pseudo mask. The object segmenter consists of a dual localization feature fusion module and a boundary enhancement decoder: the former extracts the semantic features of the object and recovers the whole object; the latter reasons over long-range pixel relations to find the exact object boundary. Additionally, we train the object segmenter with extra pixel-level labels and add constraints to optimize its training process. Finally, we apply the trained object segmenter to the weakly supervised data to improve the prediction results of the CAM. Experimental results show that our method significantly improves the quality of the pseudo masks and obtains competitive segmentation results. Compared to existing methods, ours achieves the best result on the PASCAL VOC 2012 validation set with 68.8% mIoU and a competitive 67.9% mIoU on the test set. On the MS COCO 2014 validation set, our method achieves 36.5% mIoU, outperforming all CNN-based methods and second only to transformer-based methods. Code is available at https://github.com/wjlbnw/SegmentationFromLocalization.
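The localization step the abstract describes relies on the standard class activation map construction (weighting the final convolutional feature maps by the classifier weights after global average pooling). The sketch below is not the paper's code; it is a minimal NumPy illustration of that CAM step, with made-up toy shapes, and the `0.5` foreground threshold is an arbitrary assumption for the example.

```python
import numpy as np

def class_activation_map(features, weights, class_idx):
    """Standard CAM: weight conv feature maps by classifier weights.

    features: (C, H, W) final convolutional feature maps
    weights:  (num_classes, C) fully-connected weights after global average pooling
    Returns an (H, W) map normalized to [0, 1].
    """
    # Contract the channel axis: sum_c w[class_idx, c] * features[c, :, :]
    cam = np.tensordot(weights[class_idx], features, axes=1)  # (H, W)
    cam = np.maximum(cam, 0)          # keep only positive class evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # normalize to [0, 1]
    return cam

# Toy example: 4 channels, 8x8 feature maps, 3 classes
rng = np.random.default_rng(0)
feats = rng.random((4, 8, 8))
w = rng.random((3, 4))
cam = class_activation_map(feats, w, class_idx=1)
seed_mask = cam > 0.5  # crude foreground seed; the paper's segmenter refines this
```

Such a thresholded CAM gives only a rough, sparse localization seed, which is exactly the limitation the paper's object segmenter (feature fusion plus boundary enhancement) is designed to refine into a full pseudo mask.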
Pages: 57785-57810
Number of Pages: 26
Related References
66 references in total
  • [41] Pinheiro PO, 2015, Proc CVPR IEEE, P1713, DOI 10.1109/CVPR.2015.7298780
  • [42] Rother C, Kolmogorov V, Blake A. GrabCut - Interactive foreground extraction using iterated graph cuts. ACM Transactions on Graphics, 2004, 23(3): 309-314
  • [43] Ru L, Zhan Y, Yu B, Du B. Learning Affinity from Attention: End-to-End Weakly-Supervised Semantic Segmentation with Transformers. CVPR 2022: 16825-16834
  • [44] Rube IE, 1994, IEEE Transactions on Pattern Analysis and Machine Intelligence, V16, P641
  • [45] Sharma I., 2023, 2023 3 INT S INSTR C, P105
  • [46] Sharma I, Gupta SK, Mishra A, Askar S. Synchronous federated learning based multi unmanned aerial vehicles for secure applications. Scalable Computing: Practice and Experience, 2023, 24(3): 191-201
  • [47] Song C, Huang Y, Ouyang W, Wang L. Box-driven Class-wise Region Masking and Filling Rate Guided Loss for Weakly Supervised Semantic Segmentation. CVPR 2019: 3131-3140
  • [48] Su Y, 2021, Context decoupling augmentation
  • [49] Vernaza P, Chandraker M. Learning random-walk label propagation for weakly-supervised semantic segmentation. CVPR 2017: 2953-2961
  • [50] Wang X, Liu S, Ma H, Yang M-H. Weakly-Supervised Semantic Segmentation by Iterative Affinity Learning. International Journal of Computer Vision, 2020, 128(6): 1736-1749