Learning Gait Representations with Noisy Multi-Task Learning

Cited by: 8
Authors
Cosma, Adrian [1]
Radoi, Emilian [1]
Affiliations
[1] Univ Politehn Bucuresti, Fac Automat Control & Comp Sci, Bucharest 006042, Romania
Keywords
gait recognition; self-supervised learning; pose estimation; multi-task learning; weakly-supervised learning; older adults; recognition; age; performance; patterns; image
DOI
10.3390/s22186803
Chinese Library Classification
O65 [Analytical Chemistry]
Discipline Classification Codes
070302; 081704
Abstract
Gait analysis has proven to be a reliable way to perform person identification without relying on subject cooperation. Walking is a biometric that does not significantly change over short periods of time and can be regarded as unique to each person. So far, the study of gait analysis has focused mostly on identification and demographics estimation, without considering many of the pedestrian attributes that appearance-based methods rely on. In this work, alongside gait-based person identification, we explore pedestrian attribute identification solely from movement patterns. We propose DenseGait, the largest dataset for pretraining gait analysis systems, containing 217K anonymized tracklets annotated automatically with 42 appearance attributes. DenseGait is constructed by automatically processing video streams and offers the full array of gait covariates present in the real world. We make the dataset available to the research community. Additionally, we propose GaitFormer, a transformer-based model that, after pretraining in a multi-task fashion on DenseGait, achieves 92.5% accuracy on CASIA-B and 85.33% on FVG, without utilizing any manually annotated data. This corresponds to a +14.2% and +9.67% accuracy increase compared to similar methods. Moreover, GaitFormer is able to accurately identify gender and a multitude of appearance attributes using only movement patterns. The code to reproduce the experiments is made publicly available.
Pages: 20
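
The abstract above describes GaitFormer as a transformer encoder over pose sequences, pretrained in a multi-task fashion to predict identity together with automatically annotated appearance attributes. The sketch below is a minimal illustration of that general idea only, not the authors' released code: the class name, the use of PyTorch, the layer sizes, the pooling strategy, and the loss choices are all assumptions made for the example.

# Illustrative sketch (assumed architecture, not the authors' implementation):
# a shared transformer encoder over flattened 2D pose sequences feeds two heads,
# one producing an identity embedding and one predicting appearance attributes.
import torch
import torch.nn as nn

class MultiTaskGaitEncoder(nn.Module):
    def __init__(self, num_joints=18, d_model=128, num_layers=4,
                 num_attributes=42, embed_dim=256):
        super().__init__()
        # Each frame is a flattened 2D skeleton: (x, y) per joint.
        self.input_proj = nn.Linear(num_joints * 2, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.embed_head = nn.Linear(d_model, embed_dim)      # identity embedding
        self.attr_head = nn.Linear(d_model, num_attributes)  # multi-label attributes

    def forward(self, poses):
        # poses: (batch, frames, num_joints * 2)
        x = self.encoder(self.input_proj(poses))
        pooled = x.mean(dim=1)  # temporal average pooling over frames
        return self.embed_head(pooled), self.attr_head(pooled)

# Hypothetical usage: the attribute branch is trained as multi-label
# classification; the identity branch would typically use a separate
# metric-learning or classification loss (omitted here).
model = MultiTaskGaitEncoder()
poses = torch.randn(8, 60, 18 * 2)  # 8 tracklets, 60 frames each
embedding, attr_logits = model(poses)
targets = torch.randint(0, 2, (8, 42)).float()
attr_loss = nn.BCEWithLogitsLoss()(attr_logits, targets)
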