UGaitNet: Multimodal Gait Recognition With Missing Input Modalities

Cited by: 22
Authors
Marin-Jimenez, Manuel J. [1 ]
Castro, Francisco M. [2 ]
Delgado-Escano, Ruben [2 ]
Kalogeiton, Vicky [3 ]
Guil, Nicolas [2 ]
Affiliations
[1] Univ Cordoba, Dept Comp & Numer Anal, Cordoba 14011, Spain
[2] Univ Malaga, Dept Comp Architecture, Malaga 29016, Spain
[3] Ecole Polytech, Comp Sci Lab, F-91120 Palaiseau, France
Keywords
Gait recognition; Optical imaging; Optical sensors; Videos; Training; Cameras; Computer architecture; Gait; multimodal; deep learning; biometrics; VIEW; REPRESENTATION; FUSION; IMAGE; MODEL;
DOI
10.1109/TIFS.2021.3132579
CLC classification
TP301 [Theory, Methods];
Discipline code
081202
Abstract
Gait recognition systems typically rely solely on silhouettes for extracting gait signatures. Nevertheless, these approaches struggle with changes in body shape and dynamic backgrounds; a problem that can be alleviated by learning from multiple modalities. However, in many real-life systems some modalities can be missing, and therefore most existing multimodal frameworks fail to cope with missing modalities. To tackle this problem, in this work, we propose UGaitNet, a unifying framework for gait recognition, robust to missing modalities. UGaitNet handles and mingles various types and combinations of input modalities, i.e., pixel gray value, optical flow, depth maps, and silhouettes, while being camera agnostic. We evaluate UGaitNet on two public datasets for gait recognition: CASIA-B and TUM-GAID, and show that it obtains compact and state-of-the-art gait descriptors when leveraging multiple or missing modalities. Finally, we show that UGaitNet with optical flow and grayscale inputs achieves almost perfect (98.9%) recognition accuracy on CASIA-B (same-view "normal") and 100% on TUM-GAID ("elapsed time"). Code will be available at https://github.com/avagait/ugaitnet
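The core idea the abstract describes, producing a single fixed-size gait descriptor from whatever subset of modalities happens to be available, can be illustrated with a toy sketch. This is not the paper's actual architecture (UGaitNet uses learned CNN branches and trained fusion); here the "encoders", dimensions, and mean-based fusion are all hypothetical stand-ins, chosen only to show how a missing modality leaves the descriptor shape unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality encoders: random linear maps standing in for
# the CNN branches a real system would train. Dimensions are illustrative.
EMBED_DIM = 128
MODALITY_DIMS = {"gray": 4096, "flow": 8192, "depth": 4096, "silhouette": 4096}
encoders = {m: rng.standard_normal((d, EMBED_DIM)) * 0.01
            for m, d in MODALITY_DIMS.items()}

def gait_signature(inputs):
    """Fuse whichever modalities are present into one fixed-size descriptor.

    inputs: dict mapping modality name -> flattened feature vector.
    A missing modality is simply absent from the dict; the fused
    descriptor is the mean of the available per-modality embeddings,
    so its size never depends on how many modalities were given.
    """
    embeddings = [inputs[m] @ encoders[m] for m in inputs]
    if not embeddings:
        raise ValueError("at least one modality is required")
    fused = np.mean(embeddings, axis=0)
    return fused / np.linalg.norm(fused)  # L2-normalise for cosine matching

# One sample with all four modalities, and one missing depth and silhouette:
full = {m: rng.standard_normal(d) for m, d in MODALITY_DIMS.items()}
partial = {m: full[m] for m in ("gray", "flow")}
print(gait_signature(full).shape, gait_signature(partial).shape)
```

Both calls yield a descriptor of the same shape, which is what lets a single gallery of signatures be matched against probes captured with different sensor setups.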
Pages: 5452-5462
Page count: 11
Related papers
50 records in total
  • [31] Gait recognition by fluctuations
    Aqmar, Muhammad Rasyid
    Fujihara, Yusuke
    Makihara, Yasushi
    Yagi, Yasushi
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2014, 126 : 38 - 52
  • [32] Survey of Gait Recognition
    Liu, Ling-Feng
    Jia, Wei
    Zhu, Yi-Hai
    EMERGING INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS: WITH ASPECTS OF ARTIFICIAL INTELLIGENCE, 2009, 5755 : 652 - 659
  • [33] Synthesizing and Reconstructing Missing Sensory Modalities in Behavioral Context Recognition
    Saeed, Aaqib
    Ozcelebi, Tanir
    Lukkien, Johan
    SENSORS, 2018, 18 (09)
  • [34] Gait Pyramid Attention Network: Toward Silhouette Semantic Relation Learning for Gait Recognition
    Chen, Jianyu
    Wang, Zhongyuan
    Yi, Peng
    Zeng, Kangli
    He, Zheng
    Zou, Qin
    IEEE TRANSACTIONS ON BIOMETRICS, BEHAVIOR, AND IDENTITY SCIENCE, 2022, 4 (04): : 582 - 595
  • [35] Multimodal depression recognition based on gait and rating scale
    Liu, Xiaotong
    Ren, Min
    Hu, Xuecai
    Li, Qiong
    Huang, Yongzhen
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 278
  • [36] Multimodal features fusion for gait, gender and shoes recognition
    Castro, Francisco M.
    Marín-Jiménez, Manuel J.
    Guil, Nicolás
    Machine Vision and Applications, 2016, 27 : 1213 - 1228
  • [37] Transformer-Based Multimodal Spatial-Temporal Fusion for Gait Recognition
    Zhang, Jikai
    Ji, Mengyu
    He, Yihao
    Guo, Dongliang
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2024, PT XV, 2025, 15045 : 494 - 507
  • [38] Multimodal Classification of Parkinson's Disease in Home Environments with Resiliency to Missing Modalities
    Heidarivincheh, Farnoosh
    McConville, Ryan
    Morgan, Catherine
    McNaney, Roisin
    Masullo, Alessandro
    Mirmehdi, Majid
    Whone, Alan L.
    Craddock, Ian
    SENSORS, 2021, 21 (12)
  • [39] Robust Multimodal Learning With Missing Modalities via Parameter-Efficient Adaptation
    Reza, Md Kaykobad
    Prater-Bennette, Ashley
    Asif, M. Salman
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2025, 47 (02) : 742 - 754
  • [40] Robust Multimodal Sentiment Analysis via Tag Encoding of Uncertain Missing Modalities
    Zeng, Jiandian
    Zhou, Jiantao
    Liu, Tianyi
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 6301 - 6314