Motion-X: A Large-scale 3D Expressive Whole-body Human Motion Dataset

Cited by: 0
Authors:
Lin, Jing [1 ,2 ]
Zeng, Ailing [1 ]
Lu, Shunlin [1 ,3 ]
Cai, Yuanhao [2 ]
Zhang, Ruimao [3 ]
Wang, Haoqian [2 ]
Zhang, Lei [1 ]
Affiliations:
[1] International Digital Economy Academy (IDEA), Shenzhen, China
[2] Tsinghua University, Beijing, China
[3] The Chinese University of Hong Kong, Shenzhen, China
Source:
Advances in Neural Information Processing Systems 36 (NeurIPS 2023) | 2023
Funding:
National Natural Science Foundation of China
Keywords:
DOI: not available
Chinese Library Classification:
TP18 [Artificial Intelligence Theory]
Discipline Codes:
081104; 0812; 0835; 1405
Abstract:
In this paper, we present Motion-X, a large-scale 3D expressive whole-body motion dataset. Existing motion datasets predominantly contain body-only poses and lack facial expressions, hand gestures, and fine-grained pose descriptions. Moreover, they are primarily collected from limited laboratory scenes with manually labeled textual descriptions, which greatly limits their scalability. To overcome these limitations, we develop a whole-body motion and text annotation pipeline that can automatically annotate motion from either single- or multi-view videos and provide comprehensive semantic labels for each video as well as fine-grained whole-body pose descriptions for each frame. The pipeline is highly precise, cost-effective, and scalable for further research. Based on it, we construct Motion-X, which comprises 15.6M precise 3D whole-body pose annotations (i.e., SMPL-X) covering 81.1K motion sequences from massive scenes. In addition, Motion-X provides 15.6M frame-level whole-body pose descriptions and 81.1K sequence-level semantic labels. Comprehensive experiments demonstrate the accuracy of the annotation pipeline and the significant benefit of Motion-X in enhancing expressive, diverse, and natural motion generation, as well as 3D whole-body human mesh recovery.
Pages: 13
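The abstract describes each annotated frame as a precise SMPL-X whole-body pose paired with a frame-level pose description, plus one sequence-level semantic label per clip. As a rough illustration only, the sketch below shows one way such a record could be represented in Python: the parameter dimensions follow the standard SMPL-X parameterization, but the `.npz` layout, key names, and `load_sequence` helper are hypothetical assumptions, not Motion-X's actual release format.

```python
"""Minimal sketch of a whole-body (SMPL-X) motion record as described in the
abstract.  The field dimensions follow the standard SMPL-X parameterization;
the per-sequence .npz layout and key names are hypothetical examples only."""

from dataclasses import dataclass
import numpy as np


@dataclass
class WholeBodyFrame:
    # Standard SMPL-X parameters, axis-angle unless noted.
    global_orient: np.ndarray    # (3,)  root orientation
    body_pose: np.ndarray        # (63,) 21 body joints x 3
    left_hand_pose: np.ndarray   # (45,) 15 finger joints x 3
    right_hand_pose: np.ndarray  # (45,)
    jaw_pose: np.ndarray         # (3,)
    expression: np.ndarray       # (10,) facial expression coefficients
    transl: np.ndarray           # (3,)  global translation
    pose_text: str               # frame-level whole-body pose description


def load_sequence(npz_path: str, sequence_label: str) -> list[WholeBodyFrame]:
    """Load one motion sequence from a hypothetical per-sequence .npz archive
    holding per-frame arrays; `sequence_label` is the clip-level semantic label."""
    data = np.load(npz_path, allow_pickle=True)
    n_frames = data["body_pose"].shape[0]
    frames = []
    for t in range(n_frames):
        frames.append(WholeBodyFrame(
            global_orient=data["global_orient"][t],
            body_pose=data["body_pose"][t],
            left_hand_pose=data["left_hand_pose"][t],
            right_hand_pose=data["right_hand_pose"][t],
            jaw_pose=data["jaw_pose"][t],
            expression=data["expression"][t],
            transl=data["transl"][t],
            pose_text=str(data["pose_texts"][t]),
        ))
    # The sequence-level label applies to every frame in the clip.
    print(f"Loaded {n_frames} frames for sequence labeled '{sequence_label}'")
    return frames
```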
Related Papers
(50 items in total)
• [1] Unifying Representations and Large-Scale Whole-Body Motion Databases for Studying Human Motion. Mandery, Christian; Terlemez, Oemer; Do, Martin; Vahrenkamp, Nikolaus; Asfour, Tamim. IEEE Transactions on Robotics, 2016, 32(4): 796-809.
• [2] Expressive Forecasting of 3D Whole-Body Human Motions. Ding, Pengxiang; Cui, Qiongjie; Wang, Haofan; Zhang, Min; Liu, Mengyuan; Wang, Donglin. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol. 38, No. 2, 2024: 1537-1545.
• [3] SignAvatars: A Large-Scale 3D Sign Language Holistic Motion Dataset and Benchmark. Yu, Zhengdi; Huang, Shaoli; Cheng, Yongkang; Birdal, Tolga. Computer Vision - ECCV 2024, Part V, 2025, 15063: 1-19.
• [4] Expressive Whole-Body 3D Gaussian Avatar. Moon, Gyeongsik; Shiratori, Takaaki; Saito, Shunsuke. Computer Vision - ECCV 2024, Part XLI, 2025, 15099: 19-35.
• [5] Human to Robot Whole-Body Motion Transfer. Arduengo, Miguel; Arduengo, Ana; Colome, Adria; Lobo-Prat, Joan; Torras, Carme. Proceedings of the 2020 IEEE-RAS 20th International Conference on Humanoid Robots (HUMANOIDS 2020), 2021: 299-305.
• [6] The KIT Whole-Body Human Motion Database. Mandery, Christian; Terlemez, Oemer; Do, Martin; Vahrenkamp, Nikolaus; Asfour, Tamim. Proceedings of the 17th International Conference on Advanced Robotics (ICAR), 2015: 329-336.
• [7] A Large-Scale 3D Object Recognition Dataset. Solund, Thomas; Buch, Anders Glent; Kruger, Norbert; Aanaes, Henrik. Proceedings of the 2016 Fourth International Conference on 3D Vision (3DV), 2016: 73-82.
• [8] 3D Object Detection on Large-Scale Dataset. Zhao, Yan; Zhu, Jihong; Liang, Haoyu; Chen, Lyujie. 2021 International Joint Conference on Neural Networks (IJCNN), 2021.
• [9] HardMo: A Large-Scale Hardcase Dataset for Motion Capture. Liao, Jiaqi; Luo, Chuanchen; Du, Yinuo; Wang, Yuxi; Yin, Xucheng; Zhang, Man; Zhang, Zhaoxiang; Peng, Junran. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2024), 2024: 1629-1638.
• [10] Dimensionality Reduction for Whole-Body Human Motion Recognition. Mandery, Christian; Plappert, Matthias; Borras, Julia; Asfour, Tamim. 2016 19th International Conference on Information Fusion (FUSION), 2016: 355-362.