Multitask Multigranularity Aggregation With Global-Guided Attention for Video Person Re-Identification

Cited by: 7
Authors
Sun, Dengdi [1 ]
Huang, Jiale [2 ]
Hu, Lei [2 ]
Tang, Jin [3 ]
Ding, Zhuanlian [4 ]
Affiliations
[1] Anhui Univ, Sch Artificial Intelligence, Key Lab Intelligent Comp & Signal Proc ICSP, Minist Educ, Hefei 230601, Peoples R China
[2] Anhui Univ, Wendian Coll, Hefei 230601, Peoples R China
[3] Anhui Univ, Sch Comp Sci & Technol, Anhui Prov Key Lab Multimodal Cognit Computat, Hefei 230601, Peoples R China
[4] Anhui Univ, Sch Internet, Hefei 230039, Peoples R China
Keywords
Feature extraction; Multitasking; Video sequences; Task analysis; Data mining; Semantics; Convolutional neural networks; Person re-identification; video; multi-task; multi-granularity; attention mechanism; global feature; SET;
DOI
10.1109/TCSVT.2022.3183011
CLC Classification Numbers
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Classification Codes
0808 ; 0809 ;
Abstract
The goal of video-based person re-identification (Re-ID) is to identify the same person across multiple non-overlapping cameras. The key to accomplishing this challenging task is to sufficiently exploit both spatial and temporal cues in video sequences. However, most current methods cannot accurately locate semantic regions or efficiently filter discriminative spatio-temporal features, making it difficult to handle issues such as spatial misalignment and occlusion. Thus, we propose a novel feature aggregation framework, multi-task and multi-granularity aggregation with global-guided attention (MMA-GGA), which aims to adaptively generate more representative spatio-temporal aggregation features. Specifically, we develop a multi-task multi-granularity aggregation (MMA) module to extract features at different locations and scales, identifying key semantic-aware regions that are robust to spatial misalignment. Then, to determine the importance of the multi-granular semantic information, we propose a global-guided attention (GGA) mechanism to learn weights based on the global features of the video sequence, allowing our framework to identify stable local features while ignoring occlusions. The MMA-GGA framework can therefore efficiently and effectively capture more robust and representative features. Extensive experiments on four benchmark datasets demonstrate that our MMA-GGA framework outperforms current state-of-the-art methods. In particular, our method achieves a rank-1 accuracy of 91.0% on the MARS dataset, the most widely used database, significantly outperforming existing methods.
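The abstract describes the core idea behind the GGA mechanism: local, multi-granularity features are weighted according to a sequence-level global feature, so that stable parts dominate the aggregate while occluded parts are down-weighted. As an illustrative sketch only (the paper's GGA module is a learned network; the dot-product scoring, function names, and shapes below are assumptions, not the authors' implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_guided_attention(local_feats, global_feat):
    """Aggregate local features, weighted by affinity to a global feature.

    local_feats: (N, D) array of N multi-granularity local features
    global_feat: (D,) global feature of the whole video sequence
    Returns the (D,) aggregated feature and the (N,) attention weights.
    """
    # Score each local feature by its (scaled) dot product with the
    # global feature: features consistent with the sequence-level
    # appearance score high, occluded/noisy parts score low.
    scores = local_feats @ global_feat / np.sqrt(local_feats.shape[1])
    weights = softmax(scores)
    # Weighted sum yields the aggregated representation.
    aggregated = weights @ local_feats
    return aggregated, weights
```

In this toy form, a local feature aligned with the global feature receives the largest weight; in the paper the scoring function is learned rather than a fixed dot product.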
Pages: 7758-7771
Page count: 14