Robust clothing-independent gait recognition using hybrid part-based gait features

Cited by: 2
Authors
Gao, Zhipeng [1]
Wu, Junyi [1]
Wu, Tingting [1]
Huang, Renyu [1]
Zhang, Anguo [2,3]
Zhao, Jianqiang [1]
Affiliations
[1] Xiamen Meiya Pico Information Co., Ltd., Xiamen, Fujian, People's Republic of China
[2] Minjiang University, College of Mathematics & Data Science, Fuzhou, People's Republic of China
[3] Fuzhou University, College of Physics & Information Engineering, Fuzhou, People's Republic of China
Keywords
Gait recognition; Part-based; Spatio-temporal feature learning; Clothing-independent; Identification; Walking
DOI
10.7717/peerj-cs.996
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Recently, gait has attracted extensive interest owing to its irreplaceable role in many applications. Although various methods have been proposed for gait recognition, most of them attain excellent recognition performance only when the probe and gallery gaits are captured under similar conditions. Once external factors (e.g., clothing variations) influence people's gaits and change their appearance, performance degrades significantly. Hence, in our article, a robust hybrid part-based spatio-temporal feature learning method is proposed for gait recognition to handle this clothing-change problem. First, human bodies are segmented into affected and unaffected (or less affected) parts based on anatomical studies. Then, a carefully designed network is proposed to formulate the required hybrid features from the unaffected or less affected body parts. This network contains three sub-networks, each generating features independently. Because each sub-network emphasizes a different aspect of gait, an effective hybrid gait feature can be created by concatenating their outputs. In addition, since temporal information can complement spatial information and enhance recognition performance, one sub-network is specifically designed to establish the temporal relationship between consecutive short-range frames. Also, since local features are more discriminative than global features in gait recognition, another sub-network is specifically designed to generate features of locally refined differences. The effectiveness of our proposed method has been evaluated by experiments on the CASIA Gait Dataset B and the OU-ISIR Treadmill Gait Dataset B. These experiments illustrate that, compared with other gait recognition methods, our proposed method achieves prominent results when handling the clothing-change gait recognition problem.
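To make the three-branch idea concrete, the following is a minimal PyTorch sketch in the spirit of the abstract: silhouette frames are masked to (assumed) clothing-insensitive regions, passed through a frame-level spatial branch, a short-range temporal branch, and a local frame-difference branch, and the three outputs are concatenated into a hybrid feature. The part split, branch designs, layer sizes, and all names (e.g., HybridPartFeatureSketch, crop_less_affected_parts) are hypothetical illustrations under my own assumptions, not the paper's actual architecture.

```python
# Hedged sketch of a hybrid part-based gait feature extractor.
# All design choices below are illustrative assumptions, not the authors' network.
import torch
import torch.nn as nn


def crop_less_affected_parts(silhouettes: torch.Tensor) -> torch.Tensor:
    """Keep only (assumed) clothing-insensitive regions of each frame.

    silhouettes: (N, T, H, W) binary silhouette sequences.
    As a crude stand-in for the anatomical split, keep the top 1/8 (head)
    and bottom 3/8 (legs) rows and zero out the torso band.
    """
    n, t, h, w = silhouettes.shape
    mask = torch.zeros(h, device=silhouettes.device)
    mask[: h // 8] = 1.0            # head region (assumed unaffected)
    mask[h - 3 * h // 8:] = 1.0     # leg region (assumed less affected)
    return silhouettes * mask.view(1, 1, h, 1)


class HybridPartFeatureSketch(nn.Module):
    """Three independent branches whose outputs are concatenated."""

    def __init__(self, feat_dim: int = 64):
        super().__init__()
        # Branch 1: frame-level spatial features, averaged over time.
        self.spatial = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        # Branch 2: short-range temporal features via a 3D convolution
        # spanning three consecutive frames.
        self.temporal = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(3, 3, 3), padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        # Branch 3: local refined differences between consecutive frames,
        # pooled per horizontal strip to keep fine-grained local cues.
        self.local_diff = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 1)), nn.Flatten(),   # 4 horizontal strips
            nn.Linear(16 * 4, feat_dim),
        )

    def forward(self, silhouettes: torch.Tensor) -> torch.Tensor:
        # silhouettes: (N, T, H, W)
        x = crop_less_affected_parts(silhouettes)
        n, t, h, w = x.shape

        frames = x.reshape(n * t, 1, h, w)
        f_spatial = self.spatial(frames).reshape(n, t, -1).mean(dim=1)

        f_temporal = self.temporal(x.unsqueeze(1))        # (N, 1, T, H, W) input

        diffs = (x[:, 1:] - x[:, :-1]).abs().reshape(n * (t - 1), 1, h, w)
        f_local = self.local_diff(diffs).reshape(n, t - 1, -1).mean(dim=1)

        # Hybrid gait feature: concatenation of the three branch outputs.
        return torch.cat([f_spatial, f_temporal, f_local], dim=1)


if __name__ == "__main__":
    seq = torch.rand(2, 30, 64, 44)   # 2 sequences, 30 frames of 64x44 silhouettes
    print(HybridPartFeatureSketch()(seq).shape)   # torch.Size([2, 192])
```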
Pages: 23