Occluded person re-identification based on differential attention siamese network

Cited by: 4
Authors
Wang, Liangbo [1 ]
Zhou, Yu [1 ]
Sun, Yanjing [1 ]
Li, Song [1 ]
Affiliations
[1] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou 221116, Jiangsu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Occluded person re-identification; Differential occlusion weighted aggregation; Attention mechanism;
DOI
10.1007/s10489-021-02820-6
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Occluded person re-identification (ReID) aims to retrieve images of the same person under occlusion. The key is to extract discriminative features from the nonoccluded regions to achieve better matching. To this end, we propose a novel Differential Attention Siamese Network (DASN) for occluded person ReID. Specifically, DASN comprises a siamese network module and a Differential Occlusion Weighted Aggregation (DOWA) module. First, a shared two-branch framework takes as inputs the original nonoccluded image and a corresponding occluded image generated by random erasing, producing a shared feature representation. Then, the DOWA module obtains difference features for the nonoccluded regions: the feature map of the occluded image is first subtracted from that of the original image to obtain features corresponding to the occluded regions, and these are in turn subtracted from the features of the original image. During this process, an attention mechanism is integrated to reduce inaccuracy in locating the features of nonoccluded areas. During training, multiple loss functions are combined to provide more effective learning supervision. The result is an occluded person ReID model with stronger feature representation capability. Extensive experiments on five publicly released ReID databases demonstrate that the proposed method outperforms state-of-the-art methods on the occluded person ReID task and two other ReID tasks.
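The double-subtraction step of the DOWA module described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `dowa_difference` and the elementwise attention weighting are assumptions, and note that without the attention weighting the two subtractions would trivially cancel back to the occluded image's features, which is why the attention term is essential.

```python
import numpy as np

def dowa_difference(feat_original, feat_occluded, attention):
    """Hypothetical sketch of the DOWA differential subtraction.

    feat_original: feature map of the original nonoccluded image
    feat_occluded: feature map of the randomly-erased (occluded) image
    attention:     assumed elementwise attention weights in [0, 1]
    """
    # Step 1: occluded-region response, i.e. what the erasing removed.
    occ_region = feat_original - feat_occluded
    # Step 2: subtract the attention-weighted occluded-region response
    # from the original features to keep nonoccluded-region features.
    # (With attention == 1 this reduces to feat_occluded; with
    # attention == 0 it leaves feat_original unchanged.)
    return feat_original - attention * occ_region
```

With full attention the result equals the occluded image's features, and with zero attention it equals the original features; real attention weights would interpolate between these extremes.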
Pages: 7407-7419
Number of pages: 13
Related papers
50 records
  • [1] Occluded person re-identification based on differential attention siamese network
    Liangbo Wang
    Yu Zhou
    Yanjing Sun
    Song Li
    Applied Intelligence, 2022, 52 : 7407 - 7419
  • [2] Semantic guidance attention network for occluded person re-identification
    Ren X.
    Zhang D.
    Bao X.
    Li B.
    Tongxin Xuebao/Journal on Communications, 2021, 42(10): 106-116
  • [3] Feature attention fusion network for occluded person re-identification
    Zhuang, Xuyao
    Wei, Dan
    Liang, Danyang
    Jiang, Lei
    IMAGE AND VISION COMPUTING, 2024, 143
  • [4] Person Re-Identification by Siamese Network
    Shebiah, R. Newlin
    Arivazhagan, S.
    Amrith, S. G.
    Adarsh, S.
    INTELIGENCIA ARTIFICIAL-IBEROAMERICAN JOURNAL OF ARTIFICIAL INTELLIGENCE, 2023, 26(71): 25-33
  • [5] Reasoning and Tuning: Graph Attention Network for Occluded Person Re-Identification
    Huang, Meiyan
    Hou, Chunping
    Yang, Qingyuan
    Wang, Zhipeng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 1568 - 1582
  • [6] CNN Attention Enhanced ViT Network for Occluded Person Re-Identification
    Wang, Jing
    Li, Peitong
    Zhao, Rongfeng
    Zhou, Ruyan
    Han, Yanling
    APPLIED SCIENCES-BASEL, 2023, 13 (06):
  • [7] Dual attention-based method for occluded person re-identification
    Xu, Yunjie
    Zhao, Liaoying
    Qin, Feiwei
    KNOWLEDGE-BASED SYSTEMS, 2021, 212
  • [8] Joint Convolutional and Self-Attention Network for Occluded Person Re-Identification
    Yang, Chuxia
    Fan, Wanshu
    Zhou, Dongsheng
    Zhang, Qiang
    2022 18TH INTERNATIONAL CONFERENCE ON MOBILITY, SENSING AND NETWORKING, MSN, 2022, : 757 - 764
  • [9] OCCLUDED PERSON RE-IDENTIFICATION
    Zhuo, Jiaxuan
    Chen, Zeyu
    Lai, Jianhuang
    Wang, Guangcong
    2018 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2018,
  • [10] A Siamese inception architecture network for person re-identification
    Shuangqun Li
    Huadong Ma
    Machine Vision and Applications, 2017, 28 : 725 - 736