GaitDAN: Cross-View Gait Recognition via Adversarial Domain Adaptation

Cited by: 1
Authors
Huang, Tianhuan [1 ]
Ben, Xianye [1 ]
Gong, Chen [2 ]
Xu, Wenzheng [1 ]
Wu, Qiang [3 ]
Zhou, Hongchao [1 ]
Affiliations
[1] Shandong Univ, Sch Informat Sci & Engn, Qingdao 266237, Peoples R China
[2] Nanjing Univ Sci & Technol, Minist Educ, Sch Comp Sci & Engn, Key Lab Intelligent Percept & Syst High Dimens In, Nanjing 210094, Peoples R China
[3] Univ Technol Sydney, Sch Elect & Data Engn, Sydney, NSW 2007, Australia
Keywords
Gait recognition; hierarchical feature aggregation; adversarial view-change elimination; adversarial domain adaptation
DOI
10.1109/TCSVT.2024.3384308
CLC Number (Chinese Library Classification)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Code
0808; 0809
Abstract
View change causes significant differences in gait appearance, which makes gait recognition in cross-view scenarios highly challenging. Most recent approaches either convert the gait from the original view to the target view before recognition is carried out, or extract gait features irrelevant to the camera view through brute-force learning or decoupled learning. However, these approaches have notable constraints, such as the difficulty of handling unknown camera views. This work treats the view-change issue as a domain-change issue and proposes to tackle it through adversarial domain adaptation. In this way, gait information from different views is regarded as data from different sub-domains. The proposed approach focuses on adapting to the gait feature differences caused by such sub-domain changes while maintaining sufficient discriminability across different people. For this purpose, a Hierarchical Feature Aggregation (HFA) strategy is proposed for discriminative feature extraction. By incorporating HFA, the feature extractor can aggregate spatial-temporal features across the various stages of the network, thereby obtaining comprehensive gait features. Then, an Adversarial View-change Elimination (AVE) module, equipped with a set of explicit models for recognizing the different gait viewpoints, is proposed. Through the adversarial learning process, AVE eventually becomes unable to identify the gait viewpoint from the features generated by the feature extractor. That is, the adversarial domain adaptation mitigates the view-change factor, and discriminative gait features that are compatible with all sub-domains are effectively extracted. Extensive experiments on three of the most popular public datasets, CASIA-B, OULP, and OUMVLP, demonstrate the effectiveness of our approach.
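The abstract's core mechanism (a view discriminator trained adversarially against the feature extractor so that view information is removed while identity information is kept) can be sketched in code. The following is a minimal, hypothetical PyTorch sketch assuming a gradient-reversal-layer realization of adversarial domain adaptation; the module names, layer sizes, number of views and subjects, and the gradient reversal layer itself are illustrative assumptions, not the authors' actual HFA/AVE implementation.

```python
# Illustrative sketch only: adversarial view-change elimination via a gradient
# reversal layer (GRL), a common way to realize adversarial domain adaptation.
# All shapes and module names below are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the feature extractor.
        return -ctx.lambd * grad_output, None


class FeatureExtractor(nn.Module):
    """Placeholder backbone (the paper aggregates spatial-temporal features via HFA)."""

    def __init__(self, in_dim=128, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))

    def forward(self, x):
        return self.net(x)


class ViewDiscriminator(nn.Module):
    """Tries to predict the camera view (sub-domain) from the gait feature."""

    def __init__(self, feat_dim=256, num_views=11):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_views))

    def forward(self, f, lambd=1.0):
        # The reversed gradient pushes the extractor toward view-invariant features.
        return self.net(GradReverse.apply(f, lambd))


# One hypothetical training step: the identity loss keeps features discriminative,
# while the adversarial view loss removes view-specific information.
extractor = FeatureExtractor()
id_head = nn.Linear(256, 74)           # e.g. 74 training subjects (assumed)
view_head = ViewDiscriminator()
optim = torch.optim.Adam(list(extractor.parameters()) +
                         list(id_head.parameters()) +
                         list(view_head.parameters()), lr=1e-4)

x = torch.randn(8, 128)                # dummy gait descriptors
id_labels = torch.randint(0, 74, (8,))
view_labels = torch.randint(0, 11, (8,))

feat = extractor(x)
loss = F.cross_entropy(id_head(feat), id_labels) \
     + F.cross_entropy(view_head(feat, lambd=1.0), view_labels)
optim.zero_grad()
loss.backward()
optim.step()
```

In such a setup, the identity loss preserves discriminability across people, while the reversed gradient from the view classifier removes view-specific (sub-domain) cues from the shared features, mirroring the goal described in the abstract.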
Pages: 8026 - 8040
Number of pages: 15
Related Papers
50 records in total
  • [31] Cross-View Adaptation Network for Cross-Domain Relation Extraction
    Yan, Bo
    Zhang, Dongmei
    Wang, Huadong
    Wu, Chunhua
    CHINESE COMPUTATIONAL LINGUISTICS, CCL 2019, 2019, 11856 : 306 - 317
  • [32] Quality-dependent View Transformation Model for Cross-view Gait Recognition
    Muramatsu, Daigo
    Makihara, Yasushi
    Yagi, Yasushi
    2013 IEEE SIXTH INTERNATIONAL CONFERENCE ON BIOMETRICS: THEORY, APPLICATIONS AND SYSTEMS (BTAS), 2013,
  • [33] Cross-View Gait Recognition Using Pairwise Spatial Transformer Networks
    Xu, Chi
    Makihara, Yasushi
    Li, Xiang
    Yagi, Yasushi
    Lu, Jianfeng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (01) : 260 - 274
  • [34] Enhanced Spatial-Temporal Salience for Cross-View Gait Recognition
    Huang, Tianhuan
    Ben, Xianye
    Gong, Chen
    Zhang, Baochang
    Yan, Rui
    Wu, Qiang
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (10) : 6967 - 6980
  • [35] Cross-View Gait Recognition Based on Dual-Stream Network
    Zhao, Xiaoyan
    Zhang, Wenjing
    Zhang, Tianyao
    Zhang, Zhaohui
    JOURNAL OF ADVANCED COMPUTATIONAL INTELLIGENCE AND INTELLIGENT INFORMATICS, 2021, 22 (05) : 671 - 678
  • [36] Cross-view gait recognition by fusion of multiple transformation consistency measures
    Muramatsu, Daigo
    Makihara, Yasushi
    Yagi, Yasushi
    IET BIOMETRICS, 2015, 4 (02) : 62 - 73
  • [37] Beyond view transformation: feature distribution consistent GANs for cross-view gait recognition
    Wang, Yu
    Xia, Yi
    Zhang, Yongliang
    VISUAL COMPUTER, 2022, 38 (06) : 1915 - 1928
  • [38] Beyond view transformation: feature distribution consistent GANs for cross-view gait recognition
    Yu Wang
    Yi Xia
    Yongliang Zhang
    The Visual Computer, 2022, 38 : 1915 - 1928
  • [39] Cross-view Image Generation via Mixture Generative Adversarial Network
    Wei X.
    Li J.
    Sun X.
    Liu S.-F.
    Lu Y.
    Zidonghua Xuebao/Acta Automatica Sinica, 2021, 47 (11) : 2623 - 2636
  • [40] Multi-view large population gait dataset and its performance evaluation for cross-view gait recognition
    Takemura N.
    Makihara Y.
    Muramatsu D.
    Echigo T.
    Yagi Y.
    IPSJ Transactions on Computer Vision and Applications, 2018, 10 (01)