Facial Performance Transfer via Deformable Models and Parametric Correspondence

Cited by: 11
Authors
Asthana, Akshay [1 ]
Delahunty, Miles [2 ]
Dhall, Abhinav [1 ]
Goecke, Roland [3 ]
Affiliations
[1] Australian Natl Univ, RSISE, Canberra, ACT 0200, Australia
[2] Australian Natl Univ, Canberra, ACT 2601, Australia
[3] Univ Canberra, Fac Informat Sci & Engn, Canberra, ACT 2601, Australia
Funding
Australian Research Council
Keywords
Active appearance models; facial performance transfer; face modeling and animation
DOI
10.1109/TVCG.2011.157
Chinese Library Classification (CLC) number
TP31 [Computer software]
Discipline codes
081202; 0835
Abstract
Transferring facial performance from one person's face to another's has long been of interest to the movie industry and the computer graphics community. In recent years, deformable face models, such as the Active Appearance Model (AAM), have made it possible to track and synthesize faces in real time. Not surprisingly, deformable face model-based approaches to facial performance transfer have gained tremendous interest in the computer vision and graphics communities. In this paper, we focus on the problem of real-time facial performance transfer using the AAM framework. We propose a novel approach that learns the mapping between the parameters of two completely independent AAMs and uses it to drive facial performance transfer in a more realistic manner than previous approaches. The main advantage of modeling this parametric correspondence is that it allows a "meaningful" transfer of both the nonrigid shape and texture across faces, irrespective of the speakers' gender, the shape and size of the faces, and the illumination conditions. We explore linear and nonlinear methods for modeling the parametric correspondence between the AAMs and show that the sparse linear regression method performs best. Moreover, we demonstrate the utility of the proposed framework for cross-language facial performance transfer, an application of interest to the movie dubbing industry.
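The parametric correspondence described in the abstract amounts to a regression from one AAM's parameter vectors to another's. The following is a minimal sketch of that idea, not the authors' implementation: an L1-regularized linear map fit by iterative soft-thresholding (ISTA), with synthetic data standing in for tracked AAM shape/appearance parameters. All names and dimensions here are illustrative assumptions.

```python
import numpy as np

def fit_sparse_linear_map(X, Y, lam=0.05, n_iter=500):
    """Fit W minimizing 0.5*||X @ W - Y||^2 + lam*||W||_1 via ISTA.

    X: (n_samples, d_src) source-actor AAM parameters (hypothetical)
    Y: (n_samples, d_tgt) target-actor AAM parameters (hypothetical)
    """
    d_src, d_tgt = X.shape[1], Y.shape[1]
    # step size from the Lipschitz constant of the smooth term's gradient
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2)
    W = np.zeros((d_src, d_tgt))
    for _ in range(n_iter):
        grad = X.T @ (X @ W - Y)        # gradient of 0.5*||XW - Y||^2
        W = W - lr * grad               # gradient step
        # soft-thresholding: proximal step for the L1 penalty
        W = np.sign(W) * np.maximum(np.abs(W) - lr * lam, 0.0)
    return W

# Synthetic stand-ins: 200 frames, 8 source / 6 target parameters.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
W_true = np.zeros((8, 6))
W_true[0, 0] = 2.0
W_true[3, 2] = -1.5
Y = X @ W_true + 0.01 * rng.normal(size=(200, 6))

W = fit_sparse_linear_map(X, Y)
Y_transfer = X @ W   # parameters that would drive the target AAM
```

The mapped vectors `Y_transfer` would then be fed to the target AAM's synthesis step to render the transferred performance; the paper compares this linear model against nonlinear alternatives.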
Pages: 1511-1519 (9 pages)