3D3M: 3D Modulated Morphable Model for Monocular Face Reconstruction

Cited by: 3
Authors
Li, Yong [1, 2]
Hao, Qiang [3]
Hu, Jianguo [3]
Pan, Xinmiao [3]
Li, Zechao [1, 2]
Cui, Zhen [1, 2]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, PCA Lab,Minist Educ, Key Lab Intelligent Percept & Syst High Dimens In, Nanjing 210094, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Jiangsu Key Lab Image & Video Understanding Socia, Nanjing 210094, Peoples R China
[3] MINIVISION Co Ltd, Nanjing 210000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Faces; three-dimensional displays; shape; image reconstruction; face recognition; solid modeling; codes; 3D face reconstruction; dense shape correspondence; self-supervised learning; single image; shape
DOI
10.1109/TMM.2022.3212282
CLC classification
TP [Automation technology, computer technology]
Subject classification
0812
Abstract
3D face reconstruction from a single image is a vital task in various multimedia applications. A key challenge in 3D face shape reconstruction is building the correct dense face correspondence between the monocular input face and the deformable mesh. Most existing methods rely on shape labels fitted by traditional methods or on strong priors such as multi-view geometry consistency. In contrast, we propose a 3D Modulated Morphable Model (3D3M) that learns the dense shape correspondence from monocular images in a self-supervised manner. Specifically, given a batch of input faces, 3D3M encodes their 3DMM attributes (shape, texture, lighting, etc.) and then randomly shuffles these attributes to generate attribute-changed faces. The attribute-changed faces can be encoded and rendered back in a cycle-consistent manner, which enables us to exploit self-supervised consistencies over dense mesh vertices and reconstructed pixels. These dense shape and pixel correspondences allow a series of self-supervised constraints that fit the 3D face model accurately and learn per-vertex correctives end-to-end. 3D3M produces high-quality 3D face reconstructions from monocular images. Both quantitative and qualitative experimental results verify the superiority of 3D3M over prior methods on 3D face reconstruction and face alignment.
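The core self-supervision signal described in the abstract can be illustrated with a minimal sketch: each 3DMM attribute group (shape, texture, lighting codes) is independently permuted across the batch to form attribute-changed combinations, and a cycle-consistency loss compares the shuffled codes against the codes re-encoded from the rendered faces. All names and dimensions below are illustrative assumptions; the actual 3D3M encoder, renderer, and loss weighting are not reproduced here.

```python
import numpy as np

def shuffle_attributes(codes, rng):
    """Independently permute each 3DMM attribute group across the batch,
    producing attribute-changed code combinations (hypothetical sketch)."""
    return {name: vals[rng.permutation(len(vals))] for name, vals in codes.items()}

def cycle_consistency_loss(target, reencoded):
    """Mean squared error between the shuffled codes and the codes that a
    (not shown) encoder would recover from the rendered attribute-changed faces."""
    return float(np.mean([np.mean((target[k] - reencoded[k]) ** 2) for k in target]))

rng = np.random.default_rng(0)
batch = 4
codes = {
    "shape": rng.normal(size=(batch, 80)),     # identity/shape coefficients (dim assumed)
    "texture": rng.normal(size=(batch, 80)),   # albedo coefficients (dim assumed)
    "lighting": rng.normal(size=(batch, 27)),  # spherical-harmonics lighting (dim assumed)
}
mixed = shuffle_attributes(codes, rng)
# A perfect render/re-encode cycle would reproduce the mixed codes exactly,
# giving zero cycle-consistency loss:
assert cycle_consistency_loss(mixed, mixed) == 0.0
```

In the paper's pipeline, the loss is computed against codes re-encoded from rendered images rather than against the mixed codes themselves; the sketch only shows the shuffling and loss structure.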
Pages: 6642-6652 (11 pages)