AAGCN: Adjacency-aware Graph Convolutional Network for person re-identification

Cited by: 27
Authors
Pan, Honghu [1 ]
Bai, Yang [1 ]
He, Zhenyu [1 ,2 ]
Zhang, Chunkai [1 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Peoples R China
[2] Peng Cheng Lab, Shenzhen 518055, Peoples R China
Keywords
Person re-identification; Graph Convolutional Network; Mahalanobis distance; Neural network; Performance
DOI
10.1016/j.knosys.2021.107300
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Person re-identification (ReID) is an important topic in computer vision. Existing works in this field focus primarily on learning a feature extractor that maps pedestrian images into a feature space in which feature vectors of the same identity lie close to each other. In this paper, we propose the Adjacency-Aware Graph Convolutional Network (AAGCN) to smooth the intra-class features and thereby reduce the intra-class variance. Specifically, the AAGCN takes the features learned by a backbone as its input nodes: it first establishes connections, or adjacency relations, among the intra-class features; the adjacent nodes (i.e., the intra-class features) are then smoothed thanks to the low-pass filtering property of the Graph Convolutional Network (GCN). We propose two methods to learn these adjacency relations: Mahalanobis Neighborhood Adjacency (MNA) and Non-Linear Mapping (NLM). MNA defines the adjacency weight between two nodes as the negative exponential of the Mahalanobis distance between their features, so it encourages a small Mahalanobis distance between intra-class features and a large one between inter-class features. NLM instead learns a non-linear mapping from the node features to their adjacency weights. Experimental results on both visible ReID and visible-infrared ReID verify the effectiveness of our method; for instance, our model achieves 95.7% rank-1 and 93.1% mAP on Market1501, and 58.6% rank-1 and 60.0% mAP on SYSU. (c) 2021 Published by Elsevier B.V.
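To make the abstract's MNA scheme concrete, the following is a minimal NumPy sketch of a Mahalanobis-based adjacency followed by one normalized GCN propagation (smoothing) step. The function names, the low-rank parameterization M = WᵀW of the Mahalanobis matrix, and the toy shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def mna_adjacency(X, W):
    """MNA-style adjacency: A_ij = exp(-d_M(x_i, x_j)).

    X: (n, d) node features from the backbone.
    W: (k, d) learnable projection; M = W^T W keeps the
       Mahalanobis matrix positive semi-definite (an assumed
       parameterization, not necessarily the paper's).
    """
    diff = X[:, None, :] - X[None, :, :]   # (n, n, d) pairwise differences
    proj = diff @ W.T                      # (n, n, k) projected differences
    d2 = np.sum(proj ** 2, axis=-1)        # squared Mahalanobis distances
    return np.exp(-d2)                     # weights in (0, 1]; A_ii = 1

def gcn_smooth(A, X):
    """One symmetric-normalized propagation step D^{-1/2} A D^{-1/2} X,
    which acts as a low-pass filter that pulls adjacent nodes together."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return d_inv_sqrt @ A @ d_inv_sqrt @ X
```

Because the weight decays exponentially with the learned Mahalanobis distance, same-identity features (small distance) receive near-1 adjacency and get averaged together by the propagation step, while far-apart features contribute almost nothing.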
Pages: 11