Dense 3D Face Reconstruction from a Single RGB Image

Times Cited: 0
Authors
Mao, Jianxu [1 ]
Zhang, Yifeng [1 ]
Liu, Caiping [2 ]
Tao, Ziming [1 ]
Yi, Junfei [1 ]
Wang, Yaonan [1 ]
Affiliations
[1] Hunan Univ, Coll Elect & Informat Engn, Changsha, Peoples R China
[2] Hunan Univ, Coll Comp Sci & Elect Engn, Changsha, Peoples R China
Source
2022 IEEE 25TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND ENGINEERING, CSE | 2022
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
3D face reconstruction; depth estimation; Convolutional Neural Network; Single Monocular Images; MODEL;
DOI
10.1109/CSE57773.2022.00013
Chinese Library Classification
TP39 [Computer Applications];
Discipline Code
081203; 0835;
Abstract
Monocular 3D face reconstruction is an extraordinarily difficult computer vision problem. Handling large poses and recovering facial details (such as wrinkles, moles, and beards) are common deficiencies of most existing monocular 3D face reconstruction methods. To address these two defects, we propose an end-to-end system that produces detailed 3D face reconstructions and remains robust under varied backgrounds, pose rotations, and occlusions. To capture facial detail information, we leverage an image-to-image translation network (p2p-net for short) to perform pixel-to-pixel estimation from the input RGB image to a depth map. This precise per-pixel estimation provides depth values for facial details, and we use a procedure similar to image inpainting to recover missing details. Simultaneously, to handle pose rotation and occlusion, we use CNNs to estimate a basic facial model based on the 3D Morphable Model (3DMM), which compensates for the unseen facial parts in the input image and, by fitting to the dense depth map, reduces the deviation of the final 3D model. We propose an Identity Shape Loss to strengthen the basic facial model, and we add a Multi-view Identity Loss that compares features of the fused 3D face with the ground truth from multiple viewing angles. The training data for p2p-net come from a 3D scanning system, and we augment the dataset to a larger scale for more generic training. We evaluate our method against other state-of-the-art 3D face reconstruction methods on in-the-wild face images. Qualitative and quantitative comparisons show that our method performs well in both robustness and accuracy, especially for non-frontal poses.
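As a rough illustration of two components referred to in the abstract, the sketch below shows (1) the standard linear 3DMM formulation that a coefficient-regressing CNN would use to produce the basic facial model, and (2) a generic multi-view identity loss that compares face-recognition embeddings of the fused 3D face against ground-truth views. This is a minimal sketch under assumed tensor shapes and names (e.g. reconstruct_3dmm_shape, face_net); the paper's actual architecture, basis dimensions, and loss weighting are not given in this record.

```python
import torch
import torch.nn.functional as F

def reconstruct_3dmm_shape(mean_shape, shape_basis, expr_basis, alpha, beta):
    """Linear 3DMM: vertices = mean + shape_basis @ alpha + expr_basis @ beta.

    mean_shape  : (3N,)    mean face geometry, flattened
    shape_basis : (3N, Ks) identity (shape) PCA basis
    expr_basis  : (3N, Ke) expression PCA basis
    alpha, beta : coefficient vectors regressed by the CNN for one image
    """
    verts = mean_shape + shape_basis @ alpha + expr_basis @ beta
    return verts.reshape(-1, 3)  # (N, 3) vertices of the basic face model


def multi_view_identity_loss(face_net, rendered_views, gt_views):
    """Average cosine distance between face-recognition embeddings over views.

    face_net       : a frozen, pretrained face-recognition embedder (assumed)
    rendered_views : images rendered from the fused 3D face at several angles
    gt_views       : ground-truth images of the same subject at matching angles
    """
    losses = []
    for pred, gt in zip(rendered_views, gt_views):
        f_pred = F.normalize(face_net(pred), dim=-1)
        f_gt = F.normalize(face_net(gt), dim=-1)
        losses.append(1.0 - (f_pred * f_gt).sum(dim=-1).mean())
    return torch.stack(losses).mean()
```

Averaging a cosine distance over several rendered viewpoints is one straightforward way to realize "comparing features from multi-view angles"; the authors' exact formulation may differ.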
Pages: 24-31
Number of Pages: 8