Constructing comprehensive and discriminative representations with diverse attention for occluded person re-identification

Cited by: 0
Authors
Ren, Tengfei
Lian, Qiusheng [1]
Zhang, Dan
Affiliations
[1] Yanshan Univ, Sch Informat Sci & Engn, Qinhuangdao 066004, Hebei, Peoples R China
Keywords
Person re-identification; Representation learning; Diverse attention; Transformer
DOI
10.1016/j.jvcir.2023.103993
CLC Number
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Occluded person re-identification (Re-ID) is a challenging task that aims to match occluded person images to holistic ones across different camera views. Feature diversity is crucial for achieving high-performance Re-ID. Previous methods rely on additional annotations or hand-crafted rules to achieve feature diversity, which are inefficient or infeasible for occluded Re-ID. To address this, we propose the Diverse Attention Net (DANet), which utilizes an attention mechanism to achieve diverse feature mining. Specifically, DANet incorporates a pair of complementary Diverse Parallel Attention Modules (DPAM), which, under the attention decorrelation constraint (ADC), help the model automatically capture diverse discriminative features in a global scope. Additionally, we propose an Efficient Transformer layer that can seamlessly integrate with the proposed DPAM and synergistically enhance the capability to handle occlusions. The resulting DANet constructs a set of comprehensive representations that encode diverse discriminative features. Extensive experiments demonstrate that DANet achieves state-of-the-art performance on both occluded and holistic Re-ID benchmarks.
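The attention decorrelation constraint (ADC) is described here only at a high level. Below is a minimal sketch of one plausible formulation, assuming the ADC penalizes overlap (cosine similarity) between the spatial attention maps produced by the two parallel DPAM branches so that they attend to different regions. The function name, tensor shapes, and exact penalty are illustrative assumptions and may differ from the paper's formulation.

```python
import torch
import torch.nn.functional as F

def attention_decorrelation_loss(attn_a: torch.Tensor, attn_b: torch.Tensor) -> torch.Tensor:
    """Hypothetical ADC sketch: attn_a, attn_b are (B, H, W) spatial attention
    maps from the two parallel attention branches of one DPAM pair."""
    a = attn_a.flatten(1)                    # (B, H*W)
    b = attn_b.flatten(1)
    cos = F.cosine_similarity(a, b, dim=1)   # per-sample similarity in [-1, 1]
    return cos.clamp(min=0).mean()           # penalize overlapping attention; orthogonal maps incur no loss
```

In such a design, this term would be added to the usual identification and metric losses so that the branches are pushed toward complementary, non-redundant attention regions.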
Pages: 12