GaitSet: Cross-View Gait Recognition Through Utilizing Gait As a Deep Set

Cited by: 125
Authors
Chao, Hanqing [1 ]
Wang, Kun [1 ]
He, Yiwei [1 ]
Zhang, Junping [1 ]
Feng, Jianfeng [2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200438, Peoples R China
[2] Fudan Univ, Inst Sci & Technol Brain Inspired Intelligence, Shanghai 200438, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Gait recognition; Feature extraction; Three-dimensional displays; Legged locomotion; Deep learning; Pipelines; Data mining; biometric authentication; GaitSet; deep learning;
DOI
10.1109/TPAMI.2021.3057879
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Gait is a unique biometric feature that can be recognized at a distance; thus, it has broad applications in crime prevention, forensic identification, and social security. To portray a gait, existing gait recognition methods utilize either a gait template, which makes it difficult to preserve temporal information, or a gait sequence, which maintains unnecessary sequential constraints and thus loses the flexibility of gait recognition. In this paper, we present a novel perspective that treats gait as a deep set: a set of gait frames is integrated by a global-local fused deep network, inspired by the way the left and right hemispheres of the brain process information, to learn features that can be used for identification. Based on this deep set perspective, our method is immune to frame permutations and can naturally integrate frames from different videos that have been acquired under different scenarios, such as diverse viewing angles, different clothes, or different item-carrying conditions. Experiments show that under normal walking conditions, our single-model method achieves an average rank-1 accuracy of 96.1 percent on the CASIA-B gait dataset and an accuracy of 87.9 percent on the OU-MVLP gait dataset. Under various complex scenarios, our model also exhibits a high level of robustness: it achieves accuracies of 90.8 and 70.3 percent on CASIA-B under bag-carrying and coat-wearing walking conditions, respectively, significantly outperforming the best existing methods. Moreover, the proposed method maintains a satisfactory accuracy even when only small numbers of frames are available in the test samples; for example, it achieves 85.0 percent on CASIA-B even when using only 7 frames. The source code has been released at https://github.com/AbnerHqC/GaitSet.
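The set-based integration the abstract describes can be illustrated with a minimal permutation-invariant pooling sketch. This is an illustration only, not the authors' released implementation: the `frame_encoder` stand-in, the random weights `W`, and all dimensions are hypothetical, but element-wise max pooling over per-frame features is a standard way to make a set representation independent of frame order and set size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a per-frame CNN: a random linear map + tanh.
W = rng.standard_normal((64, 16))

def frame_encoder(frame):
    # Map one 64-dim frame vector to a 16-dim feature (illustrative weights).
    return np.tanh(frame @ W)

def set_feature(frames):
    # Element-wise max over the set of frame features: the result is
    # invariant to frame order and to the number of frames in the set.
    feats = np.stack([frame_encoder(f) for f in frames])
    return feats.max(axis=0)

frames = [rng.standard_normal(64) for _ in range(7)]
shuffled = frames[::-1]

# Permuting the frames leaves the set-level feature unchanged.
assert np.allclose(set_feature(frames), set_feature(shuffled))
```

Because the pooled feature ignores ordering, frames drawn from different videos of the same subject could, in principle, be merged into one set before pooling, which is the flexibility the abstract emphasizes.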
Pages: 3467-3478 (12 pages)
Related Papers
50 entries
[21] Tifiini Alvarez, Israel Raul; Sahonero-Alvarez, Guillermo. Cross-View Gait Recognition Based on U-Net. 2020 International Joint Conference on Neural Networks (IJCNN), 2020.
[22] Huang, Yuanyuan; Zhang, Jianfu; Zhao, Haohua; Zhang, Liqing. Attention-Based Network for Cross-View Gait Recognition. Neural Information Processing (ICONIP 2018), Pt VII, 2018, 11307: 489-498.
[23] Chen, Xian; Yang, Tianqi; Xu, Jiaming. Cross-view gait recognition based on human walking trajectory. Journal of Visual Communication and Image Representation, 2014, 25(8): 1842-1855.
[24] Li, Jianfang. Research on Gait Recognition Algorithm Based on Optimized GaitSet. Computer Engineering and Applications, 2025, 61(14): 256-263.
[25] Wu, Zifeng; Huang, Yongzhen; Wang, Liang; Wang, Xiaogang; Tan, Tieniu. A Comprehensive Study on Cross-View Gait Based Human Identification with Deep CNNs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(2): 209-226.
[26] Takemura, N.; Makihara, Y.; Muramatsu, D.; Echigo, T.; Yagi, Y. Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition. IPSJ Transactions on Computer Vision and Applications, 2018, 10(1).
[27] Muramatsu, Daigo; Makihara, Yasushi; Yagi, Yasushi. View Transformation Model Incorporating Quality Measures for Cross-View Gait Recognition. IEEE Transactions on Cybernetics, 2016, 46(7): 1602-1615.
[28] Hu, S.; Wang, X.; Liu, Y. Cross-View Gait Recognition Method Based on Multi-branch Residual Deep Network. Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2021, 34(5): 455-462.
[29] Xu, Chi; Makihara, Yasushi; Li, Xiang; Yagi, Yasushi; Lu, Jianfeng. Cross-View Gait Recognition Using Pairwise Spatial Transformer Networks. IEEE Transactions on Circuits and Systems for Video Technology, 2021, 31(1): 260-274.
[30] Huang, Tianhuan; Ben, Xianye; Gong, Chen; Zhang, Baochang; Yan, Rui; Wu, Qiang. Enhanced Spatial-Temporal Salience for Cross-View Gait Recognition. IEEE Transactions on Circuits and Systems for Video Technology, 2022, 32(10): 6967-6980.