An Improved Deep Mutual-Attention Learning Model for Person Re-Identification

Cited by: 3
Authors
Jamal, Miftah Bedru [1]
Jiang, Zhengang [1]
Ming, Fang [1,2]
Affiliations
[1] Changchun Univ Sci & Technol, Sch Comp Sci, Changchun 130022, Peoples R China
[2] Changchun Univ Sci & Technol, Sch Artificial Intelligence, Changchun 130022, Peoples R China
Source
SYMMETRY-BASEL | 2020, Vol. 12, Issue 3
Keywords
person re-identification; mutual-attention; classification; verification; neural networks
DOI
10.3390/sym12030358
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Person re-identification is the task of matching pedestrian images across a network of non-overlapping camera views. It poses aggregated challenges resulting from random human poses, background clutter, illumination variations, and other factors. A vast number of studies in recent years have achieved promising success; however, key challenges have not been adequately addressed and continue to result in sub-optimal performance. Attention-based person re-identification has gained popularity for identifying discriminative features in person images, but its potential for extracting features common to a pair of person images across the feature-extraction pipeline has not been fully exploited. In this paper, we propose a novel attention-based Siamese network driven by a mutual-attention module decomposed into spatial and channel components. The proposed mutual-attention module not only guides feature extraction toward the discriminative parts of individual images, but also fuses features symmetrically across pairs of person images to capture informative regions common to both inputs. Our model simultaneously learns a feature embedding for discriminative cues and the similarity measure. The proposed model is optimized with a multi-task loss, namely classification and verification losses, and is further driven by the learnable mutual-attention module to facilitate efficient and adaptive learning. The model is thoroughly evaluated on the widely used large-scale datasets Market-1501 and DukeMTMC-ReID. Our experimental results are competitive with state-of-the-art works and demonstrate the effectiveness of the mutual-attention module.
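The core idea of the abstract, re-weighting each branch of a Siamese network with channel and spatial attention maps derived from both inputs, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the pooling choices, the sigmoid gating, and the multiplicative fusion rule are all illustrative assumptions; it only shows how a mutual-attention module can fuse a pair of feature maps symmetrically.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Squeeze the spatial dims to one gate per channel.
    pooled = feat.mean(axis=(1, 2))           # (C,)
    return sigmoid(pooled)[:, None, None]     # (C, 1, 1), broadcastable

def spatial_attention(feat):
    # Average over channels to one gate per spatial location.
    pooled = feat.mean(axis=0)                # (H, W)
    return sigmoid(pooled)[None, :, :]        # (1, H, W), broadcastable

def mutual_attention(f1, f2):
    # Symmetric fusion: both branches are re-weighted by attention maps
    # computed from BOTH inputs, so regions that are salient in the pair
    # (rather than in one image alone) are emphasised.
    ca = channel_attention(f1) * channel_attention(f2)  # shared channel gate
    sa = spatial_attention(f1) * spatial_attention(f2)  # shared spatial gate
    return f1 * ca * sa, f2 * ca * sa

# Toy feature maps standing in for one Siamese-branch output per image.
rng = np.random.default_rng(0)
f1 = rng.standard_normal((8, 4, 4))
f2 = rng.standard_normal((8, 4, 4))
o1, o2 = mutual_attention(f1, f2)
```

Because the shared gates are products of per-image attention maps, the module is symmetric by construction: swapping the two inputs simply swaps the two outputs, which matches the paper's claim of fusing features "symmetrically across pairs of person images".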
Pages: 18