Monocular Expressive Body Regression Through Body-Driven Attention

Cited by: 171
Authors
Choutas, Vasileios [1 ,2 ]
Pavlakos, Georgios [3 ]
Bolkart, Timo [1 ]
Tzionas, Dimitrios [1 ]
Black, Michael J. [1 ]
Affiliations
[1] Max Planck Inst Intelligent Syst, Tübingen, Germany
[2] Max Planck ETH Ctr Learning Syst, Tübingen, Germany
[3] Univ Penn, Philadelphia, PA USA
Source
COMPUTER VISION - ECCV 2020, PT X | 2020 / Vol. 12355
Keywords
HAND POSE ESTIMATION; 3D; SHAPE; TRACKING; CAPTURE; PEOPLE; MODEL;
DOI
10.1007/978-3-030-58607-2_2
CLC number
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
To understand how people look, interact, or perform tasks, we need to quickly and accurately capture their 3D body, face, and hands together from an RGB image. Most existing methods focus only on parts of the body. A few recent approaches reconstruct full expressive 3D humans from images using 3D body models that include the face and hands. These methods are optimization-based and thus slow, prone to local optima, and require 2D keypoints as input. We address these limitations by introducing ExPose (EXpressive POse and Shape rEgression), which directly regresses the body, face, and hands, in SMPL-X format, from an RGB image. This is a hard problem due to the high dimensionality of the body and the lack of expressive training data. Additionally, hands and faces are much smaller than the body, occupying very few image pixels. This makes hand and face estimation hard when body images are downscaled for neural networks. We make three main contributions. First, we account for the lack of training data by curating a dataset of SMPL-X fits on in-the-wild images. Second, we observe that body estimation localizes the face and hands reasonably well. We introduce body-driven attention for face and hand regions in the original image to extract higher-resolution crops that are fed to dedicated refinement modules. Third, these modules exploit part-specific knowledge from existing face- and hand-only datasets. ExPose estimates expressive 3D humans more accurately than existing optimization methods at a small fraction of the computational cost. Our data, model and code are available for research at https://expose.is.tue.mpg.de.
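The body-driven attention step described above (use the body estimate to localize the hands and face, then crop higher-resolution patches from the original image for dedicated refinement modules) can be sketched as follows. This is a minimal illustration with NumPy only, not the authors' released code; the helper name `part_crop`, the `scale` factor, and the keypoint-based bounding box are assumptions about how such a cropping stage could work.

```python
import numpy as np

def part_crop(image, part_keypoints_2d, scale=2.0):
    """Body-driven attention sketch: crop a high-resolution patch around a
    body part (hand or face) using 2D keypoints predicted for that part by
    the body network. Hypothetical helper, not the authors' implementation.
    """
    # Tight bounding box around the part's projected 2D keypoints.
    xy_min = part_keypoints_2d.min(axis=0)
    xy_max = part_keypoints_2d.max(axis=0)
    center = 0.5 * (xy_min + xy_max)
    # Enlarge the square box so the crop includes some context around the part.
    half = 0.5 * scale * (xy_max - xy_min).max()
    # Clip to image bounds and crop from the *original-resolution* image,
    # so the small hand/face regions keep as many pixels as possible.
    h, w = image.shape[:2]
    x0, y0 = np.clip([center[0] - half, center[1] - half], 0, [w, h]).astype(int)
    x1, y1 = np.clip([center[0] + half, center[1] + half], 0, [w, h]).astype(int)
    crop = image[y0:y1, x0:x1]
    # A real pipeline would resize `crop` to the refinement network's fixed
    # input size before feeding it to the hand- or face-specific module.
    return crop, (x0, y0, x1, y1)
```

The key design point the paper highlights is that the crop is taken from the original image rather than the downscaled body-network input, which is why the refinement modules see the hands and face at usable resolution.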
Pages: 20-40
Page count: 21