Global attention network for collaborative saliency detection

Cited by: 2
Authors
Li, Ce [1 ]
Xuan, Shuxing [1 ]
Liu, Fenghua [1 ]
Chang, Enbing [1 ]
Wu, Hailei [1 ]
Affiliations
[1] Lanzhou Univ Technol, Coll Elect & Informat Engn, Lanzhou, Gansu, Peoples R China
Keywords
Co-saliency; Collaborative correlation; Global information; Attention
DOI
10.1007/s13042-022-01531-9
CLC number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Collaborative saliency (co-saliency) detection aims to identify the common salient objects or regions in a set of related images. The major challenge is extracting useful information from both single images and image groups to express collaborative saliency cues. In this paper, we propose a global attention network for co-saliency detection that first extracts individual features with a feature enhancement module (FEM). To capture useful global information, a global information module (GIM) is then applied to all individual features to obtain individual cues, and finally group collaborative cues are obtained by a collaborative correlation module (CCM). Specifically, channel and spatial attention modules are plugged into the convolutional feature network. To enrich global context, the GIM embeds non-local modules in the backbone network and adopts global average pooling to extract a global semantic representation vector as each image's individual cue. The CCM then extracts collaborative, consistent information by computing the correlation between each input image's individual features and the individual cues. We evaluate our method on two co-saliency detection benchmark datasets (CoSal2015, iCoSeg). Extensive experiments demonstrate the effectiveness of the proposed model; in most cases our method exceeds state-of-the-art methods.
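To make the pipeline concrete, the following is a minimal PyTorch sketch of the three-stage design described in the abstract. The module names (FEM, GIM, CCM) follow the paper, but every internal choice here (squeeze-and-excitation channel attention, a 7x7 spatial-attention convolution, an embedded-Gaussian non-local block, and a mean-pooled group cue) is an assumed standard design, not the authors' implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ChannelAttention(nn.Module):
        # Squeeze-and-excitation style channel attention (assumed design).
        def __init__(self, c, r=16):
            super().__init__()
            self.fc = nn.Sequential(nn.Linear(c, c // r), nn.ReLU(inplace=True),
                                    nn.Linear(c // r, c), nn.Sigmoid())
        def forward(self, x):
            w = self.fc(x.mean(dim=(2, 3)))               # B x C channel weights
            return x * w[:, :, None, None]

    class SpatialAttention(nn.Module):
        # Single-map spatial attention from pooled channel statistics (assumed).
        def __init__(self):
            super().__init__()
            self.conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)
        def forward(self, x):
            s = torch.cat([x.mean(1, keepdim=True),
                           x.max(1, keepdim=True).values], dim=1)
            return x * torch.sigmoid(self.conv(s))

    class FEM(nn.Module):
        # Feature enhancement: channel then spatial attention on backbone features.
        def __init__(self, c):
            super().__init__()
            self.ca, self.sa = ChannelAttention(c), SpatialAttention()
        def forward(self, x):
            return self.sa(self.ca(x))

    class GIM(nn.Module):
        # Global information: a residual non-local block, then global average
        # pooling to one semantic vector per image (the "individual cue").
        def __init__(self, c):
            super().__init__()
            self.theta = nn.Conv2d(c, c // 2, 1)
            self.phi = nn.Conv2d(c, c // 2, 1)
            self.g = nn.Conv2d(c, c // 2, 1)
            self.out = nn.Conv2d(c // 2, c, 1)
        def forward(self, x):
            b, c, h, w = x.shape
            q = self.theta(x).flatten(2).transpose(1, 2)  # B x HW x C/2
            k = self.phi(x).flatten(2)                    # B x C/2 x HW
            v = self.g(x).flatten(2).transpose(1, 2)      # B x HW x C/2
            attn = F.softmax(q @ k, dim=-1)               # pairwise position affinity
            y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
            x = x + self.out(y)                           # residual connection
            return x, x.mean(dim=(2, 3))                  # features, B x C cues

    class CCM(nn.Module):
        # Collaborative correlation: correlate each image's features with the
        # group cue (here, the mean of the individual cues) to highlight the
        # object common to the whole group.
        def forward(self, feats, cues):
            group_cue = cues.mean(dim=0, keepdim=True)    # 1 x C group consensus
            corr = (feats * group_cue[:, :, None, None]).sum(1, keepdim=True)
            return torch.sigmoid(corr)                    # B x 1 x H x W maps

    if __name__ == "__main__":
        group = torch.randn(5, 256, 32, 32)  # backbone features of 5 related images
        fem, gim, ccm = FEM(256), GIM(256), CCM()
        feats, cues = gim(fem(group))
        maps = ccm(feats, cues)              # coarse co-saliency maps
        print(maps.shape)                    # torch.Size([5, 1, 32, 32])

The abstract does not specify how the correlated features are decoded into the final full-resolution saliency maps, so this sketch stops at a coarse per-image map.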
Pages: 407-417
Page count: 11
Related papers
47 items in total
  • [1] [Anonymous], 2015, PROC CVPR IEEE. DOI: 10.1109/CVPR.2015.7298724
  • [2] Batra D., Kowdle A., Parikh D., Luo J., Chen T. iCoseg: Interactive Co-segmentation with Intelligent Scribble Guidance. 2010 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010: 3169-3176
  • [3] Cao X., Tao Z., Zhang B., Fu H., Feng W. Self-Adaptively Weighted Co-Saliency Detection via Rank Constraint. IEEE Transactions on Image Processing, 2014, 23(9): 4175-4186
  • [4] Cheng M.-M., Mitra N. J., Huang X., Hu S.-M. SalientShape: group saliency in image collections. The Visual Computer, 2014, 30(4): 443-453
  • [5] Cheng M.-M., Zhang G.-X., Mitra N. J., Huang X., Hu S.-M. Global Contrast based Salient Region Detection. 2011 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011: 409-416
  • [6] Fan D.-P., Cheng M.-M., Liu Y., Li T., Borji A. Structure-measure: A New Way to Evaluate Foreground Maps. 2017 IEEE International Conference on Computer Vision (ICCV), 2017: 4558-4567
  • [7] Fan D.-P., Lin Z., Ji G.-P., Zhang D., Fu H., Cheng M.-M. Taking a Deeper Look at Co-Salient Object Detection. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020: 2916-2926
  • [8] Fan Q., Fan D.-P., Fu H., Tang C.-K., Shao L., Tai Y.-W. Group Collaborative Learning for Co-Salient Object Detection. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021: 12283-12293
  • [9] Fu H., Cao X., Tu Z. Cluster-Based Co-Saliency Detection. IEEE Transactions on Image Processing, 2013, 22(10): 3766-3778
  • [10] Ge C., Fu K., Liu F., Bai L., Yang J. Co-saliency detection via inter and intra saliency propagation. Signal Processing: Image Communication, 2016, 44: 69-83