GaitSet: Cross-View Gait Recognition Through Utilizing Gait As a Deep Set

Times Cited: 125
Authors
Chao, Hanqing [1 ]
Wang, Kun [1 ]
He, Yiwei [1 ]
Zhang, Junping [1 ]
Feng, Jianfeng [2 ]
Affiliations
[1] Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200438, Peoples R China
[2] Fudan Univ, Inst Sci & Technol Brain Inspired Intelligence, Shanghai 200438, Peoples R China
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Gait recognition; Feature extraction; Three-dimensional displays; Legged locomotion; Deep learning; Pipelines; Data mining; biometric authentication; GaitSet; deep learning;
DOI
10.1109/TPAMI.2021.3057879
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Gait is a unique biometric feature that can be recognized at a distance; thus, it has broad applications in crime prevention, forensic identification, and social security. To portray a gait, existing gait recognition methods utilize either a gait template, which makes it difficult to preserve temporal information, or a gait sequence, which maintains unnecessary sequential constraints and thus loses the flexibility of gait recognition. In this paper, we present a novel perspective that treats gait as a deep set: a set of gait frames is integrated by a global-local fused deep network, inspired by the way our left and right hemispheres process information, to learn features that can be used for identification. Based on this deep set perspective, our method is immune to frame permutations and can naturally integrate frames from different videos that have been acquired under different scenarios, such as diverse viewing angles, different clothes, or different item-carrying conditions. Experiments show that under normal walking conditions, our single-model method achieves an average rank-1 accuracy of 96.1 percent on the CASIA-B gait dataset and an accuracy of 87.9 percent on the OU-MVLP gait dataset. Under various complex scenarios, our model also exhibits a high level of robustness. It achieves accuracies of 90.8 and 70.3 percent on CASIA-B under bag-carrying and coat-wearing walking conditions, respectively, significantly outperforming the best existing methods. Moreover, the proposed method maintains a satisfactory accuracy even when only small numbers of frames are available in the test samples; for example, it achieves 85.0 percent on CASIA-B even when using only 7 frames. The source code has been released at https://github.com/AbnerHqC/GaitSet.
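The abstract's core claim is that aggregating frame-level features with a permutation-invariant set operation makes the gait embedding independent of frame order. The following is a minimal sketch of that idea (not the paper's actual architecture): a hypothetical per-frame feature extractor followed by element-wise max pooling over the frame axis, demonstrating that shuffling the frames leaves the pooled embedding unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(frames):
    # Stand-in for a per-frame CNN: a fixed linear projection.
    # Hypothetical toy sizes: 4-dim silhouette vector -> 3-dim feature.
    W = np.arange(12.0).reshape(4, 3)
    return frames @ W

def set_pool(features):
    # Permutation-invariant aggregation over the frame axis
    # (element-wise max), the core "gait as a set" idea:
    # the order of frames does not affect the result.
    return features.max(axis=0)

frames = rng.standard_normal((7, 4))       # a set of 7 frames, 4-dim each
shuffled = frames[rng.permutation(7)]      # same frames, different order

emb_a = set_pool(frame_features(frames))
emb_b = set_pool(frame_features(shuffled))
assert np.allclose(emb_a, emb_b)  # identical embedding either way
```

Because the pooling is over a set rather than a sequence, frames gathered from different videos of the same subject can be merged into one input set, which is what allows the method to work with as few as 7 frames.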
Pages: 3467-3478
Page count: 12
Related Papers
50 entries total
[31]   Cross-View Gait Recognition Based on Dual-Stream Network [J].
Zhao, Xiaoyan ;
Zhang, Wenjing ;
Zhang, Tianyao ;
Zhang, Zhaohui .
JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2021, 22 (05) :671-678
[32]   GaitDAN: Cross-View Gait Recognition via Adversarial Domain Adaptation [J].
Huang, Tianhuan ;
Ben, Xianye ;
Gong, Chen ;
Xu, Wenzheng ;
Wu, Qiang ;
Zhou, Hongchao .
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (09) :8026-8040
[33]   Cross-view gait recognition based on residual long short-term memory [J].
Wen, Junqin ;
Wang, Xiuhui .
MULTIMEDIA TOOLS AND APPLICATIONS, 2021, 80 (19) :28777-28788
[35]   Gait Recognition via Gait Period Set [J].
Wang, Runsheng ;
Shi, Yuxuan ;
Ling, Hefei ;
Li, Zongyi ;
Li, Ping ;
Liu, Boyuan ;
Zheng, Hanqing ;
Wang, Qian .
IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2023, 5 (02) :183-195
[36]   Tokenization of Skeleton-based Transformer Model for Cross-View Gait Recognition [J].
Kawakami, Tatsuya ;
Ryu, Jegoon ;
Kamata, Sei-ichiro .
2024 IEEE 8TH INTERNATIONAL CONFERENCE ON SIGNAL AND IMAGE PROCESSING APPLICATIONS, ICSIPA, 2024,
[37]   Graph-optimized coupled discriminant projections for cross-view gait recognition [J].
Xu, Wanjiang .
APPLIED INTELLIGENCE, 2021, 51 (11) :8149-8161
[38]   Gait recognition by fusing direct cross-view matching scores for criminal investigation [J].
Information Processing Society of Japan, (05) :35-39
[39]   Multiview max-margin subspace learning for cross-view gait recognition [J].
Xu, Wanjiang ;
Zhu, Canyan ;
Wang, Ziou .
PATTERN RECOGNITION LETTERS, 2018, 107 :75-82