GaitAMR: Cross-view gait recognition via aggregated multi-feature representation

Cited by: 21
Authors
Chen, Jianyu [1 ,4 ]
Wang, Zhongyuan [1 ]
Zheng, Caixia [2 ]
Zeng, Kangli [1 ]
Zou, Qin [1 ]
Cui, Laizhong [3 ,4 ]
Affiliations
[1] Wuhan Univ, Natl Engn Res Ctr Multimedia Software, Sch Comp Sci, Wuhan 430072, Peoples R China
[2] Northeast Normal Univ, Key Lab Appl Stat MOE, Changchun 130117, Peoples R China
[3] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Peoples R China
[4] Guangdong Lab of Artificial Intelligence & Digital Economy (SZ), Shenzhen 518060, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Gait recognition; Deep learning; Multi-feature representation; Spatiotemporal features; Cross-view task; IMAGE;
DOI
10.1016/j.ins.2023.03.145
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Gait recognition is an emerging long-distance biometric technology applied in many fields, including video surveillance. Most recent gait recognition methods treat human silhouettes as global or local regions to extract gait properties. However, the global approach may ignore fine-grained differences between limbs, whereas the local approach focuses only on the details of body parts and cannot capture correlations between adjacent regions. Moreover, in cross-view recognition, view changes significantly affect the integrity of the silhouette, so the disturbance introduced by the viewpoint itself must be taken into account. To address these problems, this paper proposes a novel gait recognition framework, namely, gait aggregated multi-feature representation (GaitAMR), to extract the most discriminative subject features. In GaitAMR, we propose a holistic and partial temporal aggregation strategy that extracts body movement descriptors both globally and locally. In addition, we use the optimal view features as supplementary information for the spatiotemporal features, thus enhancing view stability during recognition. By effectively aggregating feature representations from different domains, our method enhances the discrimination of gait patterns between subjects. Experimental results on public gait datasets show that GaitAMR improves gait recognition under occlusion conditions, outperforming state-of-the-art methods.
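The following is a minimal PyTorch-style sketch of the idea described in the abstract, not the authors' implementation: a shared frame-level CNN feeds a holistic branch (global spatial pooling plus temporal aggregation), a partial branch (horizontal strips of the feature maps), and a view branch whose output supplements the spatiotemporal features; the three descriptors are fused into one embedding. All module names, layer sizes, and the number of body parts are illustrative assumptions.

```python
# Hedged sketch of an aggregated multi-feature gait representation.
# Dimensions, branch designs, and the number of parts are assumptions,
# not taken from the GaitAMR paper.
import torch
import torch.nn as nn


class GaitAMRSketch(nn.Module):
    def __init__(self, feat_dim=128, num_parts=8):
        super().__init__()
        # Shared frame-level CNN over binary silhouettes (1 input channel).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, feat_dim, 3, padding=1), nn.ReLU(),
        )
        self.num_parts = num_parts
        # View branch: derives a view descriptor from the pooled global feature.
        self.view_head = nn.Linear(feat_dim, feat_dim)
        # Fusion of global + local + view features into the final embedding.
        self.fuse = nn.Linear(feat_dim * 3, feat_dim)

    def forward(self, silhouettes):
        # silhouettes: (batch, frames, 1, H, W)
        b, t, c, h, w = silhouettes.shape
        x = self.backbone(silhouettes.view(b * t, c, h, w))
        _, C, hp, wp = x.shape
        x = x.view(b, t, C, hp, wp)

        # Holistic branch: spatial average pooling, then temporal max over frames.
        global_feat = x.mean(dim=(3, 4)).max(dim=1).values          # (b, C)

        # Partial branch: split feature maps into horizontal strips,
        # pool each strip, then aggregate over frames and strips.
        strips = x.view(b, t, C, self.num_parts, -1).mean(dim=4)    # (b, t, C, P)
        local_feat = strips.max(dim=1).values.mean(dim=2)           # (b, C)

        # View branch: supplementary view information for the spatiotemporal features.
        view_feat = self.view_head(global_feat)                     # (b, C)

        # Aggregated multi-feature representation.
        return self.fuse(torch.cat([global_feat, local_feat, view_feat], dim=1))


if __name__ == "__main__":
    model = GaitAMRSketch()
    seq = torch.rand(2, 30, 1, 64, 44)   # 2 sequences, 30 frames, 64x44 silhouettes
    print(model(seq).shape)              # torch.Size([2, 128])
```

In a full pipeline, an embedding of this kind would typically be trained with a metric-learning objective such as a triplet loss, which is common practice in silhouette-based gait recognition.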
Pages: 16