Joint 3D facial shape reconstruction and texture completion from a single image

Cited by: 1
Authors
Xiaoxing Zeng [1 ,2 ]
Zhelun Wu [1 ]
Xiaojiang Peng [1 ]
Yu Qiao [1 ]
Affiliations
[1] Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences
[2] University of Chinese Academy of Sciences
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
CLC number
TP391.41
Subject classification code
080203
Abstract
Recent years have witnessed significant progress in image-based 3D face reconstruction using deep convolutional neural networks. However, current reconstruction methods often perform improperly in self-occluded regions and can produce inaccurate correspondences between a 2D input image and a 3D face template, hindering their use in real applications. To address these problems, we propose a deep shape reconstruction and texture completion network, SRTC-Net, which jointly reconstructs 3D facial geometry and completes texture with correspondences from a single input face image. In SRTC-Net, we leverage the geometric cues from the completed 3D texture to reconstruct detailed structures of 3D shapes. The SRTC-Net pipeline has three stages. The first introduces a correspondence network to identify pixel-wise correspondence between the input 2D image and a 3D template model, and transfers the input 2D image to a U-V texture map. The second completes the invisible and occluded areas in the U-V texture map with an inpainting network. To obtain the 3D facial geometry, the third predicts a coarse shape (a U-V position map) from the face segmented by the correspondence network using a shape network, and then refines the coarse 3D shape by regressing a U-V displacement map from the completed U-V texture map in a pixel-to-pixel way. We evaluate our method on 3D reconstruction tasks as well as on face frontalization and pose-invariant face recognition tasks, using both in-the-lab datasets (MICC, MultiPIE) and an in-the-wild dataset (CFP). The qualitative and quantitative results demonstrate the effectiveness of our method at inferring 3D facial geometry and complete texture; it outperforms or is comparable to the state of the art.
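To make the three-stage pipeline in the abstract concrete, the following is a minimal PyTorch sketch of how the stages could be wired together. It shows only the forward data flow; the module architectures, channel counts, the grid_sample-based U-V unwrapping, the tanh bounding of predicted coordinates, and the 256x256 U-V resolution are all illustrative assumptions, not the published SRTC-Net implementation, and training losses are omitted.

```python
# Minimal sketch of the three-stage SRTC-Net pipeline described in the
# abstract. All module names, layer choices, and resolutions are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    # Small convolutional building block reused by every stage below.
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                         nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class SRTCNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Stage 1: correspondence network -- per-pixel U-V coordinates
        # (2 channels) plus a face-segmentation mask (1 channel).
        self.correspondence = nn.Sequential(conv_block(3, 32), conv_block(32, 3))
        # Stage 2: inpainting network -- completes occluded U-V texture.
        self.inpainting = nn.Sequential(conv_block(4, 32), conv_block(32, 3))
        # Stage 3a: shape network -- coarse U-V position map (x, y, z per
        # U-V pixel) from the segmented face.
        self.shape = nn.Sequential(conv_block(3, 32), conv_block(32, 3))
        # Stage 3b: displacement regressor -- pixel-to-pixel refinement
        # driven by geometric cues in the completed texture.
        self.displacement = nn.Sequential(conv_block(3, 32), conv_block(32, 3))

    def forward(self, image):
        out = self.correspondence(image)
        uv_coords, mask = out[:, :2], torch.sigmoid(out[:, 2:3])
        # Sample the input image into U-V space with the predicted
        # correspondences (grid_sample expects coordinates in [-1, 1]).
        grid = uv_coords.permute(0, 2, 3, 1).tanh()
        uv_texture = nn.functional.grid_sample(image, grid, align_corners=False)
        completed = self.inpainting(torch.cat([uv_texture, mask], dim=1))
        coarse = self.shape(image * mask)                 # coarse U-V position map
        refined = coarse + self.displacement(completed)   # add U-V displacement map
        return refined, completed

x = torch.randn(1, 3, 256, 256)
shape_map, texture = SRTCNetSketch()(x)
print(shape_map.shape, texture.shape)  # both torch.Size([1, 3, 256, 256])
```

The residual connection in the last step mirrors the coarse-then-refine structure the abstract describes: the displacement map only has to encode fine detail recovered from the completed texture, not the full facial geometry.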
Pages: 239 - 256
Page count: 18