S2-Net: Semantic and Saliency Attention Network for Person Re-Identification

Cited by: 10
Authors
Ren, Xuena [1 ,2 ]
Zhang, Dongming [3 ]
Bao, Xiuguo [3 ]
Zhang, Yongdong [4 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, Beijing 100045, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100045, Peoples R China
[3] Natl Comp Network Emergency Response Tech Team Coo, Beijing 100029, Peoples R China
[4] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Anhui, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Human semantic mask; person Re-ID; saliency attention; semantic attention;
DOI
10.1109/TMM.2022.3174768
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline classification code
0812 ;
Abstract
Person re-identification remains a challenging task when moving objects or other people occlude the probe person. Mainstream methods based on even partitioning apply an off-the-shelf human semantic parser to highlight the non-occluded parts. In this paper, we apply an attention branch to learn the human semantic partition, avoiding the misalignment introduced by even partitioning. Specifically, we propose a semantic attention branch that learns five human semantic maps. We also note that accessories or belongings, such as a hat or bag, may provide informative clues that improve person Re-ID. Human semantic parsing, however, usually treats non-human parts as distractions and discards them. To recover these missing clues, we design a branch that captures salient non-human parts. Finally, we merge the semantic and saliency attention to build an end-to-end network, named S2-Net. To further improve Re-ID, we develop a trade-off weighting scheme between semantic and saliency attention and select the weight according to the actual scene. Extensive experiments show that S2-Net achieves competitive performance: 87.4% mAP on Market1501, and 79.3%/56.1% rank-1/mAP on MSMT17 without semantic supervision. The source code is available at https://github.com/upgirlnana/S2Net.
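The abstract describes fusing a semantic attention branch with a saliency attention branch through a trade-off weight. The sketch below is a hypothetical illustration of that fusion step, not the authors' implementation (see the linked repository for that): the function name `merge_attention`, the use of a max over part maps, and the single scalar weight `lam` are all assumptions made for clarity.

```python
import numpy as np

def merge_attention(features, semantic_maps, saliency_map, lam=0.5):
    """Fuse semantic and saliency attention (illustrative sketch).

    features:      (C, H, W) feature tensor from the backbone
    semantic_maps: (K, H, W) soft semantic part maps (e.g. K = 5)
    saliency_map:  (H, W)    saliency map covering salient non-human parts
    lam:           trade-off weight between semantic and saliency attention
    """
    # Collapse the K part maps into one foreground attention map.
    semantic_att = semantic_maps.max(axis=0)            # (H, W)
    # Weighted fusion of the two attention maps.
    fused = lam * semantic_att + (1.0 - lam) * saliency_map
    # Re-weight the feature maps spatially, broadcasting over channels.
    return features * fused[None, :, :]
```

With `lam` close to 1 the network relies mostly on the human semantic partition; lowering it lets salient non-human regions (hat, bag) contribute more, matching the scene-dependent weighting the paper motivates.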
Pages: 4387-4399
Number of pages: 13