RPNet: Gait Recognition With Relationships Between Each Body-Parts

Cited by: 27
Authors
Qin, Hao [1 ]
Chen, Zhenxue [1 ,2 ]
Guo, Qingqiang [1 ]
Wu, Q. M. Jonathan [3 ]
Lu, Mengxu [1 ]
Affiliations
[1] Shandong Univ, Sch Control Sci & Engn, Jinan 250061, Peoples R China
[2] Xidian Univ, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[3] Univ Windsor, Dept Elect & Comp Engn, Windsor, ON N9B 3P4, Canada
Funding
National Natural Science Foundation of China;
Keywords
Feature extraction; Gait recognition; Legged locomotion; Data models; Analytical models; Convolutional neural networks; Convolution; convolutional neural network (CNN); partial relationship; different scale blocks; TRANSFORMATION MODEL; VIEW; ATTENTION;
DOI
10.1109/TCSVT.2021.3095290
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809;
Abstract
Many studies have shown that partitioning the gait sequence and its feature maps can improve the accuracy of gait recognition. However, most models cut the feature map at a single fixed scale, which loses the dependencies between parts. This paper therefore proposes a structure called the Part Feature Relationship Extractor (PFRE) to discover the relationships between body parts for gait recognition, and combines PFRE with a convolutional neural network (CNN) to form RPNet. PFRE consists of two parts: the Total-Partial Feature Extractor (TPFE), which extracts features from blocks at different scales, and the Adjacent Feature Relation Extractor (AFRE), which finds the relationships between blocks. In addition, the paper varies the number of input frames during training in quantitative experiments and identifies the relationship between the number of input frames and model performance. The model is evaluated on three public gait datasets, CASIA-B, OU-LP, and OU-MVLP. It exhibits significant robustness to occlusion and achieves accuracies of 92.82% and 80.26% on CASIA-B under the BG# and CL# conditions, respectively. The results show that the method performs at the top level among state-of-the-art methods.
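The pipeline the abstract describes, multi-scale horizontal partitioning of a feature map (TPFE) followed by modelling relations between adjacent blocks (AFRE), can be illustrated with a minimal NumPy sketch. The function names, the average-pooling choice, and the difference-based relation below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def total_partial_features(feature_map, scales=(1, 2, 4)):
    """Split an (H, W) feature map into horizontal strips at several
    scales and average-pool each strip. This loosely mirrors the
    multi-scale block partitioning the abstract attributes to TPFE;
    the pooling choice here is a simplifying assumption."""
    h, _ = feature_map.shape
    blocks = []
    for s in scales:
        strip_h = h // s
        for i in range(s):
            strip = feature_map[i * strip_h:(i + 1) * strip_h]
            blocks.append(strip.mean())  # one pooled value per strip
    return np.array(blocks)

def adjacent_relations(block_features):
    """Relate each block feature to its neighbour (here, a simple
    difference), a stand-in for the adjacent-block relation
    modelling the abstract attributes to AFRE."""
    return np.array([block_features[i + 1] - block_features[i]
                     for i in range(len(block_features) - 1)])

# Toy example: a 4x4 "feature map" split at scales 1 and 2.
fm = np.arange(16, dtype=float).reshape(4, 4)
blocks = total_partial_features(fm, scales=(1, 2))
relations = adjacent_relations(blocks)
```

In the paper the blocks are feature tensors and the relation module is learned; this sketch only shows the partition-then-relate data flow.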
Pages: 2990-3000
Page count: 11
References (49 in total)
[31] Liao, Rijun. Biometric Recognition: 12th Chinese Conference, CCBR 2017, Proceedings, LNCS 10568, 2017: 474. DOI: 10.1007/978-3-319-69923-3_51
[32] Su, Chi; Li, Jianing; Zhang, Shiliang; Xing, Junliang; Gao, Wen; Tian, Qi. Pose-driven Deep Convolutional Model for Person Re-identification. 2017 IEEE International Conference on Computer Vision (ICCV), 2017: 3980-3989
[33] Sun, Yifan; Zheng, Liang; Yang, Yi; Tian, Qi; Wang, Shengjin. Beyond Part Models: Person Retrieval with Refined Part Pooling (and A Strong Convolutional Baseline). Computer Vision - ECCV 2018, Pt IV, LNCS 11208, 2018: 501-518
[34] Takemura, N.; Makihara, Y.; Muramatsu, D.; Echigo, T.; Yagi, Y. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Transactions on Computer Vision and Applications, 2018, 10(1)
[35] Takemura, Noriko; Makihara, Yasushi; Muramatsu, Daigo; Echigo, Tomio; Yagi, Yasushi. On Input/Output Architectures for Convolutional Neural Network-Based Cross-View Gait Recognition. IEEE Transactions on Circuits and Systems for Video Technology, 2019, 29(9): 2708-2719
[36] Vishwakarma, D. K. 2015 2nd International Conference on Computing for Sustainable Global Development (INDIACom), 2015: 336
[37] Vishwakarma, Dinesh Kumar. A two-fold transformation model for human action recognition using decisive pose. Cognitive Systems Research, 2020, 61: 1-13
[38] Vishwakarma, Dinesh Kumar; Dhiman, Chhavi. A unified model for human activity recognition using spatial distribution of gradients and difference of Gaussian kernel. Visual Computer, 2019, 35(11): 1595-1613
[39] Wang, L.; Tan, T.; Ning, H. Z.; Hu, W. M. Silhouette analysis-based gait recognition for human identification. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, 25(12): 1505-1518
[40] Wolf, T. IEEE International Conference on Image Processing (ICIP), 2016: 4165. DOI: 10.1109/ICIP.2016.7533144