Spatio-Temporal Fusion Gait Recognition Method Combining Silhouette and Pose

Cited by: 0
Authors
Zhang, Chaoyue [1 ]
Zhang, Rong [1 ]
Affiliations
[1] Faculty of Electrical Engineering and Computer Science, Ningbo University, Ningbo, Zhejiang, China
Keywords
attention; gait recognition; multi-modality; multi-scale; occlusion resistance;
DOI
10.3778/j.issn.1002-8331.2204-0500
CLC Number
Q66 [Biomechanics]; Q811 [Bionics]; Q692
Subject Classification Code
1111
Abstract
Most existing gait recognition methods are silhouette-based; however, silhouettes are easily corrupted by occlusion, which degrades recognition accuracy. In real surveillance scenarios occlusion is almost inevitable, so improving recognition accuracy under occlusion is a prerequisite for deploying gait recognition in practice. To address this problem, a spatio-temporal fusion gait recognition method combining silhouette and pose is proposed. Exploiting the robustness of pose to occlusion, a multi-modality spatial feature fusion module is designed that uses a feature reuse strategy and a modal fusion strategy to increase the information capacity of spatial features. A multi-scale temporal feature extraction module extracts temporal information at different time scales with independent branches, and an attention-based fusion strategy integrates the temporal information adaptively. A spatial feature set branch further improves the representation of spatio-temporal features through deep supervision. Experimental results on publicly available datasets demonstrate the effectiveness of the proposed method and show that the model remains robust under occlusion. © 2023 Journal of Computer Engineering and Applications Beijing Co., Ltd.; Science Press. All rights reserved.
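The abstract names two mechanisms: per-frame fusion of silhouette and pose features, and multi-scale temporal branches whose outputs are combined by attention. The paper's own architecture is not reproduced here; the following is a minimal PyTorch sketch of those two ideas only, in which every module name, channel size, and kernel size is an assumption rather than the authors' design.

```python
# Minimal sketch (not the authors' code) of two ideas named in the abstract:
# (1) channel-wise fusion of per-frame silhouette and pose features, and
# (2) independent temporal branches at several scales fused by attention.
# All names, channel sizes, and kernel sizes below are assumptions.
import torch
import torch.nn as nn


class MultiScaleTemporalFusion(nn.Module):
    def __init__(self, channels: int = 128, scales=(3, 5, 7)):
        super().__init__()
        # One independent 1-D temporal convolution branch per time scale.
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=k, padding=k // 2)
            for k in scales
        )
        # Attention head that scores each branch from its temporally pooled output.
        self.attn = nn.Sequential(
            nn.Linear(channels, channels // 4),
            nn.ReLU(inplace=True),
            nn.Linear(channels // 4, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames) -- per-frame fused silhouette + pose features.
        outs = [torch.relu(b(x)) for b in self.branches]          # each (B, C, T)
        pooled = torch.stack([o.mean(dim=2) for o in outs], 1)    # (B, S, C)
        weights = torch.softmax(self.attn(pooled), dim=1)         # (B, S, 1)
        fused = (torch.stack(outs, dim=1) * weights.unsqueeze(-1)).sum(dim=1)  # (B, C, T)
        return fused.max(dim=2).values                            # (B, C) sequence-level feature


if __name__ == "__main__":
    # Toy usage with hypothetical per-frame embeddings from the two modalities.
    sil = torch.randn(2, 64, 30)    # (batch, channels, frames) from a silhouette encoder
    pose = torch.randn(2, 64, 30)   # (batch, channels, frames) from a pose/skeleton encoder
    frame_feat = torch.cat([sil, pose], dim=1)  # simple channel-wise modal fusion, 128 channels
    model = MultiScaleTemporalFusion(channels=128)
    print(model(frame_feat).shape)  # torch.Size([2, 128])
```

Softmax attention over the pooled branch outputs is just one way to realize "adaptive integration of temporal information at different scales"; the paper may weight branches differently.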
Pages: 135-142
Number of pages: 7
References
18 in total
[1]  
CHAO H, WANG K, HE Y, Et al., GaitSet: cross-view gait recognition through utilizing gait as a deep set[J], IEEE Transactions on Pattern Analysis and Machine Intelligence, 44, 7, pp. 3467-3478, (2022)
[2]  
FAN C, PENG Y, CAO C, Et al., GaitPart: temporal part-based model for gait recognition[C], Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 14225-14233, (2020)
[3]  
HOU S, LIU X, CAO C, Et al., Set residual network for silhouette-based gait recognition[J], IEEE Transactions on Biometrics Behavior and Identity Science, 3, 3, pp. 384-393, (2021)
[4]  
IWAMA H, MURAMATSU D, MAKIHARA Y, Et al., Gait verification system for criminal investigation[J], IPSJ Transactions on Computer Vision and Applications, 5, pp. 163-175, (2013)
[5]  
LYNNERUP N, LARSEN P K., Gait as evidence[J], IET Biometrics, 3, 2, pp. 47-54, (2014)
[6]  
MAHFOUF Z, MEROUANI H F, BOUCHRIKA I, Et al., Investigating the use of motion-based features from optical flow for gait recognition[J], Neurocomputing, 283, pp. 140-149, (2018)
[7]  
ZHANG Y, HUANG Y, YU S, Et al., Cross-view gait recognition by discriminative feature learning[J], IEEE Transactions on Image Processing, 29, pp. 1001-1015, (2019)
[8]  
LIAO R, CAO C, GARCIA E B, Et al., Pose-based temporal-spatial network (PTSN) for gait recognition with carrying and clothing variations[C], Proceedings of the 12th Chinese Conference on Biometric Recognition, pp. 474-483, (2017)
[9]  
TEEPE T, KHAN A, GILG J, Et al., GaitGraph: graph convolutional network for skeleton-based gait recognition[C], Proceedings of the 2021 IEEE International Conference on Image Processing, pp. 2314-2318, (2021)
[10]  
YU S, TAN D, TAN T., A framework for evaluating the effect of view angle, clothing and carrying condition on gait recognition[C], Proceedings of the 18th International Conference on Pattern Recognition, 4, pp. 441-444, (2006)