HULC: 3D HUman Motion Capture with Pose Manifold SampLing and Dense Contact Guidance

Cited by: 14
Authors
Shimada, Soshi [1 ]
Golyanik, Vladislav [1 ]
Li, Zhi [1 ]
Perez, Patrick [2 ]
Xu, Weipeng [1 ]
Theobalt, Christian [1 ]
Affiliations
[1] Max Planck Inst Informat, Saarland Informat Campus, Saarbrücken, Germany
[2] Valeo.ai, Paris, France
Source
COMPUTER VISION, ECCV 2022, PT XXII | 2022 / Vol. 13682
Keywords
3D Human MoCap; Dense contact estimations; Sampling;
DOI
10.1007/978-3-031-20047-2_30
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Marker-less monocular 3D human motion capture (MoCap) with scene interactions is a challenging research topic relevant for extended reality, robotics and virtual avatar generation. Due to the inherent depth ambiguity of monocular settings, 3D motions captured with existing methods often contain severe artefacts such as incorrect body-scene inter-penetrations, jitter and body floating. To tackle these issues, we propose HULC, a new approach for 3D human MoCap which is aware of the scene geometry. HULC estimates 3D poses and dense body-environment surface contacts for improved 3D localisations, as well as the absolute scale of the subject. Furthermore, we introduce a 3D pose trajectory optimisation based on a novel pose manifold sampling that resolves erroneous body-environment inter-penetrations. Although the proposed method requires less structured inputs compared to existing scene-aware monocular MoCap algorithms, it produces more physically-plausible poses: HULC significantly and consistently outperforms the existing approaches in various experiments and on different metrics. Project page: https://vcai.mpi-inf.mpg.de/projects/HULC/.
Pages: 516-533
Page count: 18