Playing for 3D Human Recovery

Cited by: 1
Authors
Cai, Zhongang [1 ,2 ]
Zhang, Mingyuan [1 ]
Ren, Jiawei [1 ]
Wei, Chen [2 ]
Ren, Daxuan [1 ]
Lin, Zhengyu [2 ]
Zhao, Haiyu [2 ]
Yang, Lei [2 ]
Loy, Chen Change [1 ]
Liu, Ziwei [1 ]
Affiliations
[1] Nanyang Technol Univ, S Lab, Singapore 639798, Singapore
[2] Shanghai AI Lab, Shanghai 200240, Peoples R China
Keywords
Three-dimensional displays; Annotations; Synthetic data; Shape; Training; Parametric statistics; Solid modeling; Human pose and shape estimation; 3D human recovery; parametric humans; synthetic data; dataset
DOI
10.1109/TPAMI.2024.3450537
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Image- and video-based 3D human recovery (i.e., pose and shape estimation) has achieved substantial progress. However, due to the prohibitive cost of motion capture, existing datasets are often limited in scale and diversity. In this work, we obtain massive human sequences by playing a video game with automatically annotated 3D ground truths. Specifically, we contribute GTA-Human, a large-scale 3D human dataset generated with the GTA-V game engine, featuring a highly diverse set of subjects, actions, and scenarios. More importantly, we study the use of game-playing data and obtain five major insights. First, game-playing data is surprisingly effective: a simple frame-based baseline trained on GTA-Human outperforms more sophisticated methods by a large margin, and for video-based methods GTA-Human is even on par with the in-domain training set. Second, synthetic data provides a critical complement to real data, which is typically collected indoors; our investigation into the domain gap explains why our simple data mixture strategies are effective, offering new insights to the research community. Third, the scale of the dataset matters: the performance boost is closely tied to the amount of additional data, and a systematic study of key factors (such as camera angle and body pose) reveals that model performance is sensitive to data density. Fourth, the effectiveness of GTA-Human is also attributed to its rich collection of strong supervision labels (SMPL parameters), which are expensive to acquire in real datasets. Fifth, the benefits of synthetic data extend to larger models such as deeper convolutional neural networks (CNNs) and Transformers, for which a significant impact is also observed. We hope our work paves the way for scaling up 3D human recovery to the real world.
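To make the real-plus-synthetic data mixture idea concrete, below is a minimal sketch, assuming a generic PyTorch training setup rather than the authors' implementation: real indoor samples and synthetic GTA-Human-style samples are simply concatenated and used to supervise a frame-based SMPL regressor. The dataset class, feature dimensions, field names, loss, and sample counts are all illustrative assumptions.

# Minimal sketch (not the authors' code) of mixing real and synthetic training data.
# Synthetic GTA-Human-style samples carry full SMPL parameter labels; the real
# samples here are stand-ins with the same fields. All sizes and names are assumed.
import torch
from torch import nn
from torch.utils.data import Dataset, ConcatDataset, DataLoader


class DummyHumanDataset(Dataset):
    """Stand-in for either a real mocap dataset or a synthetic GTA-Human split."""

    def __init__(self, num_samples):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, idx):
        return {
            "image_feat": torch.randn(512),   # placeholder per-frame image feature
            "smpl_pose": torch.randn(72),     # SMPL pose parameters (axis-angle)
            "smpl_shape": torch.randn(10),    # SMPL shape coefficients
        }


# Mix real and synthetic data by plain concatenation, the simplest mixture strategy.
real_data = DummyHumanDataset(num_samples=1000)       # e.g., an indoor mocap dataset
synthetic_data = DummyHumanDataset(num_samples=4000)  # e.g., GTA-Human samples
loader = DataLoader(ConcatDataset([real_data, synthetic_data]),
                    batch_size=64, shuffle=True)

# Frame-based baseline: regress SMPL pose and shape directly from image features,
# supervised by the strong SMPL parameter labels emphasized in the abstract.
regressor = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 72 + 10))
optimizer = torch.optim.Adam(regressor.parameters(), lr=1e-4)

for batch in loader:
    pred = regressor(batch["image_feat"])
    target = torch.cat([batch["smpl_pose"], batch["smpl_shape"]], dim=-1)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In this sketch the real-to-synthetic ratio is controlled only by the relative dataset sizes; a weighted sampler would be an equally simple way to tune the mixture.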
Pages: 10533-10545
Page count: 13