Real-Time Facial Segmentation and Performance Capture from RGB Input

Cited by: 62
Authors
Saito, Shunsuke [1 ,2 ]
Li, Tianye [1 ,2 ]
Li, Hao [1 ,2 ]
Affiliations
[1] Pinscreen, Santa Monica, CA 90401 USA
[2] Univ Southern Calif, Los Angeles, CA USA
Source
COMPUTER VISION - ECCV 2016, PT VIII | 2016 / Vol. 9912
Keywords
Real-time facial performance capture; Face segmentation; Deep convolutional neural network; Regression; FACE ALIGNMENT; OCCLUSION; MODELS;
DOI
10.1007/978-3-319-46484-8_15
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce the concept of unconstrained real-time 3D facial performance capture through explicit semantic segmentation in the RGB input. To ensure robustness, cutting-edge supervised learning approaches rely on large training datasets of face images captured in the wild. While impressive tracking quality has been demonstrated for faces that are largely visible, any occlusion due to hair, accessories, or hand-to-face gestures would result in significant visual artifacts and loss of tracking accuracy. The modeling of occlusions has been mostly avoided due to its immense space of appearance variability. To address this curse of high dimensionality, we perform tracking in unconstrained images assuming non-face regions can be fully masked out. Along with recent breakthroughs in deep learning, we demonstrate that pixel-level facial segmentation is possible in real time by repurposing convolutional neural networks originally designed for general semantic segmentation. We develop an efficient architecture based on a two-stream deconvolution network with complementary characteristics, and introduce carefully designed training samples and data augmentation strategies for improved segmentation accuracy and robustness. We adopt a state-of-the-art regression-based facial tracking framework trained on segmented face images, and demonstrate accurate and uninterrupted facial performance capture in the presence of extreme occlusion and even side views. Furthermore, the resulting segmentation can be directly used to composite partial 3D face models onto the input images and enable seamless facial manipulation tasks, such as virtual make-up or face replacement.
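To make the segmentation stage concrete, below is a minimal, illustrative sketch (not the authors' code) of a two-stream encoder-decoder for binary face/non-face segmentation. It assumes a small shared convolutional encoder and two transposed-convolution ("deconvolution") decoders whose outputs are fused before per-pixel classification; the layer sizes, channel counts, and input resolution are arbitrary placeholders rather than the architecture reported in the paper.

```python
# Hedged sketch of a two-stream deconvolution segmentation network (PyTorch).
# All hyperparameters (channels, depth, 128x128 input) are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # Convolution + ReLU, then halve the spatial resolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2))


def deconv_block(in_ch, out_ch):
    # Transposed convolution that doubles the spatial resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2),
        nn.ReLU(inplace=True))


class TwoStreamSegNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder: 128x128 RGB -> 16x16 feature map.
        self.encoder = nn.Sequential(
            conv_block(3, 32), conv_block(32, 64), conv_block(64, 128))
        # Two decoder streams; in practice they would differ (e.g. coarse vs.
        # detailed upsampling) to provide complementary characteristics.
        self.decoder_a = nn.Sequential(
            deconv_block(128, 64), deconv_block(64, 32), deconv_block(32, 16))
        self.decoder_b = nn.Sequential(
            deconv_block(128, 64), deconv_block(64, 32), deconv_block(32, 16))
        # Fuse the two streams into per-pixel face / non-face logits.
        self.classifier = nn.Conv2d(32, 2, kernel_size=1)

    def forward(self, x):
        feat = self.encoder(x)
        fused = torch.cat([self.decoder_a(feat), self.decoder_b(feat)], dim=1)
        return self.classifier(fused)  # shape (N, 2, H, W)


if __name__ == "__main__":
    net = TwoStreamSegNet()
    mask_logits = net(torch.randn(1, 3, 128, 128))
    print(mask_logits.shape)  # torch.Size([1, 2, 128, 128])
```

The resulting per-pixel mask would then be used to discard non-face pixels before the regression-based tracking stage, which is what allows tracking to continue under hair, accessory, or hand occlusions.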
Pages: 244-261
Page count: 18