GaitSlice: A gait recognition model based on spatio-temporal slice features

Cited: 0
Authors
Li, Huakang [1 ,2 ]
Qiu, Yidan [3 ]
Zhao, Huimin [1 ,2 ]
Zhan, Jin [1 ,2 ]
Chen, Rongjun [1 ,2 ]
Wei, Tuanjie [1 ,2 ]
Huang, Zhihui [1 ,2 ]
Affiliations
[1] School of Computer Science, Pattern Recognition and Intelligent System Laboratory, Guangdong Polytechnic Normal University, Guangzhou, 510665, China
[2] Guangdong Key Laboratory of Intellectual Property and Big Data, Guangzhou, 510665, China
[3] School of Psychology, Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Center for the Study of Applied Psychology, Key Laboratory of Mental Health and Cognitive Science of Guangdong Province,
Funding
National Natural Science Foundation of China;
Keywords
Pattern recognition; Semantics
DOI
Not available
CLC Number
Q66 [Biomechanics]; Q811 [Bionics]; Q692 [];
Discipline Code
1111;
Abstract
Improving the performance of gait recognition under multiple camera views (i.e., cross-view gait recognition) and under various walking conditions is an urgent problem. From observation, we find that adjacent body parts are inter-related while walking, and that each frame in a gait sequence carries a different degree of semantic information. In this paper, we propose a novel model, GaitSlice, to analyze human gait based on spatio-temporal slice features. Spatially, we design the Slice Extraction Device (SED) to form top-down inter-related slice features. Temporally, we introduce the Residual Frame Attention Mechanism (RFAM) to acquire and highlight the key frames. To better reflect reality, GaitSlice combines parallel RFAMs with the inter-related slice features to focus on the features' spatio-temporal information. We evaluate our model on the CASIA-B and OU-MVLP gait datasets and compare it with six typical gait recognition models using rank-1 accuracy. The results show that GaitSlice achieves high accuracy in gait recognition under cross-view and various walking conditions. © 2021 Elsevier Ltd
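The abstract only names the two components, so the PyTorch sketch below shows one plausible reading of them: a SED-like module that pools a frame's feature map into top-down horizontal slices and relates each slice to its neighbour, and an RFAM-like module that scores frames over time and adds the attention-weighted result back through a residual connection. All module definitions, shapes, and hyper-parameters here (eight slices, a 1x1 convolution, a linear frame-scoring layer) are our assumptions for illustration, not the paper's actual implementation.

```python
# Illustrative sketch only; the paper's real SED/RFAM definitions may differ.
import torch
import torch.nn as nn


class SliceExtractionDevice(nn.Module):
    """Assumed SED: pool a frame feature map into top-down horizontal
    slices, then relate each slice to its upper neighbour."""

    def __init__(self, channels: int, num_slices: int = 8):
        super().__init__()
        self.num_slices = num_slices
        # 1x1 conv mixes each slice with its neighbour (an assumption).
        self.relate = nn.Conv1d(2 * channels, channels, kernel_size=1)

    def forward(self, x):                         # x: (N, C, H, W), H % num_slices == 0
        n, c, h, w = x.shape
        s = self.num_slices
        slices = x.view(n, c, s, h // s, w).mean(dim=(3, 4))   # (N, C, S) slice features
        upper = torch.roll(slices, shifts=1, dims=2)           # cyclic shift pairs each
        # slice with its upper neighbour (a sketch simplification at the top slice).
        return self.relate(torch.cat([slices, upper], dim=1))  # (N, C, S) inter-related


class ResidualFrameAttention(nn.Module):
    """Assumed RFAM: score each frame, softmax over time, and add the
    attention-weighted features back to the input (residual path)."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, x):                         # x: (N, T, C) per-frame features
        w = torch.softmax(self.score(x), dim=1)   # (N, T, 1) weights over frames
        return x + w * x                          # highlight key frames, keep residual


# Toy usage: 2 silhouette sequences of 30 frames, 64-channel 64x44 feature maps.
feats = torch.randn(2 * 30, 64, 64, 44)           # (N*T, C, H, W)
sed = SliceExtractionDevice(channels=64, num_slices=8)
slice_feats = sed(feats)                          # (60, 64, 8)
frames = slice_feats.mean(dim=2).view(2, 30, 64)  # (N, T, C) per-frame descriptors
rfam = ResidualFrameAttention(channels=64)
out = rfam(frames)                                # (2, 30, 64), key frames emphasised
```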
Related Papers
50 records in total
  • [11] Joint model of gradient magnitude and Gabor features via Spatio-Temporal slice
    Bediako, Daniel Oppong
    Mou, Xuanqin
    Journal of Visual Communication and Image Representation, 2021, 79
  • [12] Action recognition using spatio-temporal regularity based features
    Goodhart, Taylor
    Yan, Pingkun
    Shah, Mubarak
    2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Vols 1-12, 2008: 745-748
  • [13] Associated Spatio-Temporal Capsule Network for Gait Recognition
    Zhao, Aite
    Dong, Junyu
    Li, Jianbo
    Qi, Lin
    Zhou, Huiyu
    IEEE Transactions on Multimedia, 2022, 24: 846-860
  • [14] Hierarchical Spatio-Temporal Representation Learning for Gait Recognition
    Wang, Lei
    Liu, Bo
    Liang, Fangfang
    Wang, Bincheng
    2023 IEEE/CVF International Conference on Computer Vision (ICCV 2023), 2023: 19582-19592
  • [15] A spatio-temporal integrated model based on local and global features for video expression recognition
    Hu, Min
    Ge, Peng
    Wang, Xiaohua
    Lin, Hui
    Ren, Fuji
    The Visual Computer, 2022, 38(08): 2617-2634
  • [17] Silhouette spatio-temporal spectrum (SStS) for gait-based human recognition
    Lam, THW
    Ieong, TWHA
    Lee, RST
    Pattern Recognition and Image Analysis, Pt 2, Proceedings, 2005, 3687: 309-315
  • [18] Study of human action recognition based on improved spatio-temporal features
    Ji, X.-F.
    Wu, Q.-Q.
    Ju, Z.-J.
    Wang, Y.-Y.
    International Journal of Automation and Computing, 2014, 11(05): 500-509
  • [19] A fast human action recognition network based on spatio-temporal features
    Xu, Jie
    Song, Rui
    Wei, Haoliang
    Guo, Jinhong
    Zhou, Yifei
    Huang, Xiwei
    Neurocomputing, 2021, 441: 350-358