RPNet: Gait Recognition With Relationships Between Each Body-Parts

Cited by: 27
Authors
Qin, Hao [1]
Chen, Zhenxue [1,2]
Guo, Qingqiang [1]
Wu, Q. M. Jonathan [3]
Lu, Mengxu [1]
Affiliations
[1] Shandong Univ, Sch Control Sci & Engn, Jinan 250061, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[3] Univ Windsor, Dept Elect & Comp Engn, Windsor, ON N9B 3P4, Canada
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Gait recognition; Legged locomotion; Data models; Analytical models; Convolutional neural networks; Convolution; convolutional neural network (CNN); partial relationship; different scale blocks; TRANSFORMATION MODEL; VIEW; ATTENTION;
DOI
10.1109/TCSVT.2021.3095290
CLC Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Many studies have shown that partitioning the gait sequence and its feature map can improve the accuracy of gait recognition. However, most models cut the feature map at a single fixed scale, which discards the dependencies between body parts. This paper therefore proposes a structure called the Part Feature Relationship Extractor (PFRE) to discover the relationships among body parts for gait recognition. PFRE is combined with a Convolutional Neural Network (CNN) to form RPNet. PFRE consists of two components: the Total-Partial Feature Extractor (TPFE), which extracts features from blocks at different scales, and the Adjacent Feature Relation Extractor (AFRE), which captures the relationships between blocks. In addition, the number of input frames is varied during training in quantitative experiments to establish how the frame count affects model performance. The model is evaluated on three public gait datasets: CASIA-B, OU-LP, and OU-MVLP. It exhibits significant robustness to occlusion and achieves accuracies of 92.82% and 80.26% on CASIA-B under the BG# and CL# conditions, respectively. The results show that our method performs at the level of state-of-the-art approaches.
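To make the architectural description above concrete, the following is a minimal, hypothetical Python/PyTorch sketch of the PFRE idea as summarized in the abstract: a TPFE-like step that cuts the feature map into horizontal blocks at several scales and pools each block, and an AFRE-like step that fuses the pooled features of adjacent blocks. The class, parameter, and layer names (PFRESketch, scales, relate) and the exact pairing scheme are illustrative assumptions, not the authors' published implementation.

# Hypothetical sketch of the PFRE idea (TPFE + AFRE), not the authors' code.
import torch
import torch.nn as nn


class PFRESketch(nn.Module):
    def __init__(self, channels=256, scales=(1, 2, 4, 8)):
        super().__init__()
        self.scales = scales
        # AFRE-like step: fuse the pooled features of two adjacent blocks.
        self.relate = nn.Linear(2 * channels, channels)

    def forward(self, x):
        # x: (batch, channels, height, width) feature map from a CNN backbone.
        part_feats, relation_feats = [], []
        for s in self.scales:
            # TPFE-like step: cut the map into s horizontal blocks and pool each.
            blocks = x.chunk(s, dim=2)
            pooled = [b.mean(dim=(2, 3)) for b in blocks]  # (batch, channels) each
            part_feats.extend(pooled)
            # AFRE-like step: pairwise features of neighbouring blocks.
            for a, b in zip(pooled[:-1], pooled[1:]):
                relation_feats.append(self.relate(torch.cat([a, b], dim=1)))
        # Concatenate part-level and relation-level descriptors.
        return torch.cat(part_feats + relation_feats, dim=1)


if __name__ == "__main__":
    feat = torch.randn(4, 256, 64, 44)  # dummy backbone output
    print(PFRESketch()(feat).shape)    # (4, 26 * 256) for the assumed scales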
Pages: 2990-3000
Number of pages: 11