Background Activation Suppression for Weakly Supervised Object Localization and Semantic Segmentation

Cited by: 10
Authors
Zhai, Wei [1 ]
Wu, Pingyu [1 ]
Zhu, Kai [1 ]
Cao, Yang [1 ,2 ]
Wu, Feng [1 ,2 ]
Zha, Zheng-Jun [1 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Hefei, Peoples R China
Keywords
Weakly supervised; Object localization; Background activation suppression; Semantic segmentation
DOI
10.1007/s11263-023-01919-2
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Weakly supervised object localization and semantic segmentation aim to localize objects using only image-level labels. Recently, a new paradigm has emerged that generates a foreground prediction map (FPM) to achieve pixel-level localization. Existing FPM-based methods use cross-entropy to evaluate the foreground prediction map and to guide the learning of the generator; in contrast, this paper presents two striking experimental observations about the localization learning process. For a trained network, as the foreground mask expands, (1) the cross-entropy converges to zero while the mask still covers only part of the object region, whereas (2) the activation value keeps increasing until the mask reaches the object boundary. Therefore, to achieve more effective localization, we argue for using the activation value to learn more complete object regions. In this paper, we propose a background activation suppression (BAS) method. Specifically, an activation map constraint module is designed to facilitate the learning of the generator by suppressing the background activation value. Meanwhile, by using foreground region guidance and an area constraint, BAS can learn the whole region of the object. In the inference phase, we combine the prediction maps of different categories to obtain the final localization results. Extensive experiments show that BAS achieves significant and consistent improvement over the baseline methods on the CUB-200-2011 and ILSVRC datasets. In addition, our method also achieves state-of-the-art weakly supervised semantic segmentation performance on the PASCAL VOC 2012 and MS COCO 2014 datasets. Code and models are available at https://github.com/wpy1999/BAS-Extension.
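
The abstract describes three coupled training signals (foreground region guidance, background activation suppression, and an area constraint). Below is a minimal PyTorch-style sketch of how such a loss could be wired together; it is an illustration under assumptions, not the authors' implementation. All names (bas_losses, feat, fg_map) and the loss weights are hypothetical; the actual code is in the linked repository.

    import torch
    import torch.nn.functional as F

    def bas_losses(feat, fg_map, classifier, label):
        """Hypothetical sketch of the three loss terms the abstract describes.

        feat       -- (B, C, H, W) backbone feature map (assumed shape)
        fg_map     -- (B, 1, H, W) foreground prediction map in [0, 1]
        classifier -- linear head mapping pooled C-dim features to class logits
        label      -- (B,) image-level class indices (the only supervision)
        """
        # Foreground region guidance: classify from the predicted foreground
        # only, so the image-level label pulls the mask onto the object.
        fg_logits = classifier(F.adaptive_avg_pool2d(feat * fg_map, 1).flatten(1))
        loss_frg = F.cross_entropy(fg_logits, label)

        # Background activation suppression: the labeled class's activation
        # computed from background-masked features should be small relative
        # to its activation from the full feature map. Per the observations,
        # the activation value (unlike cross-entropy) keeps supplying a
        # training signal until the mask reaches the object boundary.
        idx = label.unsqueeze(1)
        bg_act = classifier(F.adaptive_avg_pool2d(feat * (1 - fg_map), 1).flatten(1))
        all_act = classifier(F.adaptive_avg_pool2d(feat, 1).flatten(1))
        bg_act = F.relu(bg_act.gather(1, idx))
        all_act = F.relu(all_act.gather(1, idx))
        loss_bas = (bg_act / (all_act + 1e-8)).mean()  # ratio form, scale-invariant

        # Area constraint: without it, predicting "everything is foreground"
        # trivially zeroes the background activation, so penalize mask area.
        loss_area = fg_map.mean()

        return loss_frg, loss_bas, loss_area

    # Toy usage; dimensions and loss weights are placeholders.
    B, C, H, W, K = 2, 512, 14, 14, 200
    feat = torch.randn(B, C, H, W)
    fg_map = torch.rand(B, 1, H, W)
    clf = torch.nn.Linear(C, K)
    label = torch.randint(0, K, (B,))
    l_frg, l_bas, l_area = bas_losses(feat, fg_map, clf, label)
    total = l_frg + 1.0 * l_bas + 0.5 * l_area

At inference, one simple reading of combining "the prediction maps of different categories" is to aggregate (e.g., average) the foreground maps of the top-scoring classes before thresholding; the exact aggregation rule is specified in the paper and repository.
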
Pages: 750-775
Number of pages: 26