Multi-View Large Population Gait Database With Human Meshes and Its Performance Evaluation

Cited by: 20
Authors
Li, Xiang [1]
Makihara, Yasushi [1]
Xu, Chi [1]
Yagi, Yasushi [1]
Affiliations
[1] Osaka Univ, Dept Intelligent Media, SANKEN, Suita, Osaka 565-0871, Japan
Funding
Japan Society for the Promotion of Science (JSPS)
Keywords
Asynchronous multi-view sequences; gait database; gait recognition; three-dimensional human pose/shape estimation; person recognition; view; extraction; model
DOI
10.1109/TBIOM.2022.3174559
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Existing model-based gait databases provide 2D poses (i.e., joint locations) extracted by general-purpose pose estimators as the human model. However, these 2D poses suffer from information loss and are of relatively low quality. In this paper, we consider a more informative 3D human mesh model with parametric pose and shape features, and propose a multi-view training framework for accurate mesh estimation. Unlike existing methods, which estimate a mesh from a single view and therefore face an ill-posed estimation problem in 3D space, the proposed framework takes asynchronous multi-view gait sequences as input and uses both multi-view and single-view streams to learn consistent and accurate mesh models for both multi-view and single-view sequences. After applying the proposed framework to the existing OU-MVLP database, we establish a large-scale gait database with human meshes (i.e., OUMVLP-Mesh) containing over 10,000 subjects and up to 14 view angles. Experimental results show that the proposed framework estimates human mesh models more accurately than comparable methods, and that the estimated models are of sufficient quality to improve the recognition performance of a baseline model-based gait recognition approach.
Pages: 234-248
Page count: 15