S2-Net: Semantic and Saliency Attention Network for Person Re-Identification

Cited by: 10
Authors
Ren, Xuena [1 ,2 ]
Zhang, Dongming [3 ]
Bao, Xiuguo [3 ]
Zhang, Yongdong [4 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing 100045, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100045, Peoples R China
[3] Natl Comp Network Emergency Response Tech Team Coo, Beijing 100029, Peoples R China
[4] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Human semantic mask; person Re-ID; saliency attention; semantic attention;
DOI
10.1109/TMM.2022.3174768
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Person re-identification remains challenging when moving objects or other people occlude the probe person. Mainstream methods based on even partitioning apply an off-the-shelf human semantic parser to highlight the non-occluded parts. In this paper, we apply an attention branch to learn the human semantic partition, avoiding the misalignment introduced by even partitioning. Specifically, we propose a semantic attention branch that learns five human semantic maps. We also note that accessories or belongings, such as a hat or bag, may provide informative clues that improve person Re-ID. Human semantic parsing, however, usually treats non-human parts as distractions and discards them. To recover these missing clues, we design a branch that captures the salient non-human parts. Finally, we merge the semantic and saliency attention into an end-to-end network, named S2-Net. To further improve Re-ID, we develop a trade-off weighting scheme between semantic and saliency attention and set the weight according to the actual scene. Extensive experiments show that S2-Net achieves competitive performance: 87.4% mAP on Market1501 and 79.3%/56.1% rank-1/mAP on MSMT17, without semantic supervision. The source code is available at https://github.com/upgirlnana/S2Net.
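The abstract's core idea, combining semantic-part attention with non-human saliency attention via a trade-off weight, can be sketched as follows. This is a hypothetical NumPy illustration only: the function `fuse_attention`, the parameter `alpha`, and the max-pooling over part maps are assumptions for exposition, not the paper's actual S2-Net implementation.

```python
import numpy as np

def fuse_attention(semantic_maps, saliency_map, alpha=0.5):
    """Fuse K human-semantic attention maps with one saliency map.

    semantic_maps: (K, H, W) soft part masks (e.g. K=5 as in the paper).
    saliency_map:  (H, W) saliency over non-human salient regions.
    alpha:         trade-off weight between the two attention sources.
    Returns a (H, W) attention map normalized to [0, 1].
    """
    # Collapse the K semantic part maps into one foreground map.
    semantic_att = semantic_maps.max(axis=0)
    # Weighted trade-off: human parts vs. salient belongings (hat, bag, ...).
    fused = alpha * semantic_att + (1.0 - alpha) * saliency_map
    # Normalize so the map can reweight backbone feature maps.
    return fused / (fused.max() + 1e-8)

# Toy example: five 4x4 semantic part maps and one 4x4 saliency map.
rng = np.random.default_rng(0)
sem = rng.random((5, 4, 4))
sal = rng.random((4, 4))
att = fuse_attention(sem, sal, alpha=0.7)
```

A larger `alpha` trusts the learned semantic partition more, while a smaller one lets salient non-human regions contribute, matching the abstract's point that the weight should be set according to the actual scene.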
Pages: 4387-4399
Page count: 13
Related Papers
50 records in total
  • [21] Attention-Aligned Network for Person Re-Identification
    Lian, Sicheng
    Jiang, Weitao
    Hu, Haifeng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (08) : 3140 - 3153
  • [22] Adaptive Graph Attention Network in Person Re-Identification
    Duy, L. D.
    Hung, P. D.
    PATTERN RECOGNITION AND IMAGE ANALYSIS, 2022, 32 (02) : 384 - 392
  • [23] Unsupervised Region Attention Network for Person Re-Identification
    Zhang, Chenrui
    Wu, Yangxu
    Lei, Tao
    IEEE ACCESS, 2019, 7 : 165520 - 165530
  • [24] Saliency Weighted Features for Person Re-identification
    Martinel, Niki
    Micheloni, Christian
    Foresti, Gian Luca
    COMPUTER VISION - ECCV 2014 WORKSHOPS, PT III, 2015, 8927 : 191 - 208
  • [25] Person Re-identification Based on Visual Saliency
    Liu, Ying
    Shao, Yu
    Sun, Fuchun
    2012 12TH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS DESIGN AND APPLICATIONS (ISDA), 2012, : 884 - 889
  • [26] SALIENCY PREPROCESSING FOR PERSON RE-IDENTIFICATION IMAGES
    Ma, Cong
    Miao, Zhenjiang
    Li, Min
    2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016, : 1941 - 1945
  • [27] MHSA-Net: Multihead Self-Attention Network for Occluded Person Re-Identification
    Tan, Hongchen
    Liu, Xiuping
    Yin, Baocai
    Li, Xin
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (11) : 8210 - 8224
  • [28] MBA-Net: multi-branch attention network for occluded person re-identification
    Hong, Xing
    Zhang, Langwen
    Yu, Xiaoyuan
    Xie, Wei
    Xie, Yumin
    MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 83 (2) : 6393 - 6412
  • [30] IAB-Net: Informative and Attention Based Person Re-Identification
    Faizan, Rao
    Fraz, Muhammad Moazam
    Shahzad, Muhammad
    2021 INTERNATIONAL CONFERENCE ON DIGITAL FUTURES AND TRANSFORMATIVE TECHNOLOGIES (ICODT2), 2021,