Integral Pose Learning via Appearance Transfer for Gait Recognition

Times Cited: 3
Authors
Huang, Panjian [1]
Hou, Saihui [1]
Cao, Chunshui [2]
Liu, Xu [2]
Hu, Xuecai [1]
Huang, Yongzhen [1]
Affiliations
[1] Beijing Normal Univ, Sch Artificial Intelligence, Beijing 100875, Peoples R China
[2] Watrix Technol Co Ltd, Beijing 100088, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Integral pose; appearance transfer; gait recognition; disentangling representation learning;
DOI
10.1109/TIFS.2024.3382606
CLC Number
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Gait recognition plays an important role in video surveillance and security by identifying humans from their unique walking patterns. Existing gait recognition methods achieve competitive accuracy from shape and motion patterns under limited-covariate conditions. However, when extreme appearance changes distort discriminative features, gait recognition yields unsatisfactory results under cross-covariate conditions. In this work, we first show that the integral pose in each silhouette preserves discriminative identity information that is unrelated to appearance. However, the monotonous appearance variation in a gait database makes it difficult for gait models to extract integral poses. Therefore, we propose an Appearance-transferable Disentangling and Generative Network (GaitApp) to generate gait silhouettes with rich appearances and invariant poses. Specifically, GaitApp leverages multi-branch cooperation to disentangle pose features from appearance features and transfers the appearance information from one subject to another. By simulating a person constantly changing appearance under limited-covariate conditions, GaitApp enables downstream models to extract discriminative integral pose features. Extensive experiments demonstrate that our method lifts representative gait models to a new level of performance, further promoting the exploration of cross-covariate gait recognition. All the code is available at https://github.com/Hpjhpjhs/GaitApp.git
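The core mechanism described in the abstract, disentangling each silhouette into a pose code and an appearance code and then swapping appearance codes across subjects, can be illustrated with a minimal PyTorch sketch. The module names, layer sizes, and losses implied below (Encoder, Decoder, AppearanceTransfer) are illustrative assumptions and not the paper's actual GaitApp architecture or training recipe.

```python
# Minimal sketch of appearance-transfer disentangling for gait silhouettes.
# Assumptions: 64x44 binary silhouettes, two encoders (pose / appearance),
# and a shared decoder; layer sizes are arbitrary illustration choices.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    """Small convolutional encoder mapping a 1x64x44 silhouette to a code vector."""
    def __init__(self, out_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, out_dim),
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    """Reconstructs a 1x64x44 silhouette from concatenated pose and appearance codes."""
    def __init__(self, pose_dim, app_dim):
        super().__init__()
        self.fc = nn.Linear(pose_dim + app_dim, 64 * 16 * 11)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, pose, app):
        h = self.fc(torch.cat([pose, app], dim=1)).view(-1, 64, 16, 11)
        return self.net(h)


class AppearanceTransfer(nn.Module):
    """Disentangle pose and appearance, then swap appearance between two subjects."""
    def __init__(self, pose_dim=128, app_dim=64):
        super().__init__()
        self.pose_enc = Encoder(pose_dim)  # appearance-unrelated branch
        self.app_enc = Encoder(app_dim)    # appearance branch
        self.dec = Decoder(pose_dim, app_dim)

    def forward(self, sil_a, sil_b):
        pose_a, app_a = self.pose_enc(sil_a), self.app_enc(sil_a)
        pose_b, app_b = self.pose_enc(sil_b), self.app_enc(sil_b)
        # Cross-reconstruction: subject A's pose rendered with B's appearance
        # (and vice versa) simulates appearance changes with invariant poses.
        transfer_ab = self.dec(pose_a, app_b)
        transfer_ba = self.dec(pose_b, app_a)
        # Self-reconstructions for a standard reconstruction loss.
        recon_a = self.dec(pose_a, app_a)
        recon_b = self.dec(pose_b, app_b)
        return transfer_ab, transfer_ba, recon_a, recon_b


if __name__ == "__main__":
    model = AppearanceTransfer()
    a = torch.rand(8, 1, 64, 44)  # silhouettes of subject A
    b = torch.rand(8, 1, 64, 44)  # silhouettes of subject B
    t_ab, t_ba, r_a, r_b = model(a, b)
    print(t_ab.shape)  # torch.Size([8, 1, 64, 44])
```

The cross-reconstructed silhouettes (t_ab, t_ba) could then be mixed into the training set of a downstream gait model so that it sees the same pose under varied appearances, which is the augmentation effect the abstract describes.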
Pages: 4716-4727
Page Count: 12
Related Papers
50 records in total
[21] Li, Jingqi; Zhang, Yuzhen; Zeng, Yi; Ye, Changxin; Xu, Wenzheng; Ben, Xianye; Wang, Fei-Yue; Zhang, Junping. Rethinking Appearance-Based Deep Gait Recognition: Reviews, Analysis, and Insights From Gait Recognition Evolution. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36(6): 9777-9797.
[22] Ma, Guangkai; Wang, Yan; Wu, Ligang. Subspace ensemble learning via totally-corrective boosting for gait recognition. NEUROCOMPUTING, 2017, 224: 119-127.
[23] Cheng Fengjiang; Deng Muqing; Wang Cong. Kinect-based Gait Recognition System Design Via Deterministic Learning. 2017 29TH CHINESE CONTROL AND DECISION CONFERENCE (CCDC), 2017: 5916-5921.
[24] Huo, Wei; Wang, Ke; Tang, Jun; Wang, Nian; Liang, Dong. Gait Recognition via Motion Difference Representation Learning and Salient Feature Modeling. IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2025.
[25] Conchari, Christian; Sahonero-Alvarez, Guillermo; Mollocuaquira, Raul; Salazar, Edgar. Distributed edge computing for appearance-based gait recognition. 2024 IEEE ANDESCON, 2024.
[26] Parashar, Anubha; Parashar, Apoorva; Shekhawat, Rajveer Singh. A robust covariate-invariant gait recognition based on pose features. IET BIOMETRICS, 2022, 11(6): 601-613.
[27] Liao, Rijun; Li, Zhu; Bhattacharyya, Shuvra S.; York, George. DensePoseGait: Dense Human Pose Part-Guided for Gait Recognition. IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2025, 7(1): 33-46.
[28] Fu, Yang; Hou, Saihui; Meng, Shibei; Hu, Xuecai; Cao, Chunshui; Liu, Xu; Huang, Yongzhen. Cut Out the Middleman: Revisiting Pose-Based Gait Recognition. COMPUTER VISION - ECCV 2024, PT XXXI, 2025, 15089: 112-128.
[29] Martin-Felez, Raul; Xiang, Tao. Uncooperative gait recognition by learning to rank. PATTERN RECOGNITION, 2014, 47(12): 3793-3806.
[30] Zhang, Ziyuan; Tran, Luan; Liu, Feng; Liu, Xiaoming. On Learning Disentangled Representations for Gait Recognition. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44(1): 345-360.