RPNet: Gait Recognition With Relationships Between Each Body-Parts

Cited by: 27
Authors
Qin, Hao [1 ]
Chen, Zhenxue [1 ,2 ]
Guo, Qingqiang [1 ]
Wu, Q. M. Jonathan [3 ]
Lu, Mengxu [1 ]
Affiliations
[1] Shandong Univ, Sch Control Sci & Engn, Jinan 250061, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[3] Univ Windsor, Dept Elect & Comp Engn, Windsor, ON N9B 3P4, Canada
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Gait recognition; Legged locomotion; Data models; Analytical models; Convolutional neural networks; Convolution; convolutional neural network (CNN); partial relationship; different scale blocks; TRANSFORMATION MODEL; VIEW; ATTENTION;
DOI
10.1109/TCSVT.2021.3095290
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Many studies have shown that partitioning the gait sequence and its feature map can improve the accuracy of gait recognition. However, most models simply cut the feature map at a single fixed scale, which loses the dependencies between the various parts. This paper therefore proposes a structure called the Part Feature Relationship Extractor (PFRE) to discover the relationships between body parts for gait recognition. The paper combines PFRE with a Convolutional Neural Network (CNN) to form RPNet. PFRE consists of two components: the Total-Partial Feature Extractor (TPFE), which extracts features from blocks at different scales, and the Adjacent Feature Relation Extractor (AFRE), which finds the relationships between blocks. In addition, the paper varies the number of input frames during training in quantitative experiments and identifies the relationship between the number of input frames and model performance. The model is tested on three public gait datasets: CASIA-B, OU-LP, and OU-MVLP. It exhibits a significant level of robustness to occlusion and achieves accuracies of 92.82% and 80.26% on CASIA-B under the BG and CL conditions, respectively. The results show that the method reaches the top level among state-of-the-art approaches.
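The abstract describes TPFE and AFRE only at a high level. A minimal NumPy sketch of the general idea, multi-scale horizontal partitioning of a feature map followed by pairing adjacent part features, is given below; all function names, the choice of scales, the pooling operator, and the concatenation-based relation are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def total_partial_features(fmap, scales=(1, 2, 4)):
    """TPFE-style sketch: split a (C, H, W) feature map into horizontal
    strips at several scales and max-pool each strip into a C-dim vector."""
    C, H, W = fmap.shape
    parts = []
    for s in scales:
        step = H // s  # assumes H is divisible by each scale
        for i in range(s):
            strip = fmap[:, i * step:(i + 1) * step, :]
            parts.append(strip.max(axis=(1, 2)))  # (C,) per strip
    return np.stack(parts)  # (sum(scales), C)

def adjacent_relations(parts):
    """AFRE-style sketch: encode the relationship between neighbouring
    blocks by concatenating each part vector with the next one."""
    return np.stack([np.concatenate([parts[i], parts[i + 1]])
                     for i in range(len(parts) - 1)])  # (P-1, 2C)

# Toy feature map: C=8 channels, H=16 rows, W=11 columns.
fmap = np.random.rand(8, 16, 11)
parts = total_partial_features(fmap)  # (1+2+4, 8) = (7, 8)
rel = adjacent_relations(parts)       # (6, 16)
```

Under these assumptions, the relation features `rel` would then be fed, together with the part features, into the downstream recognition head.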
Pages: 2990-3000 (11 pages)