Synthesizing performance-driven facial animation

Cited by: 0
Authors
Luo, Chang-Wei [1 ]
Yu, Jun [1 ]
Wang, Zeng-Fu [1 ,2 ,3 ]
Affiliations
[1] Department of Automation, University of Science and Technology of China, Hefei 230027, China
[2] Institute of Intelligent Machines, Chinese Academy of Sciences, Hefei 230031, China
[3] National Laboratory of Speech and Language Information Processing, University of Science and Technology of China, Hefei 230027, China
Source
Zidonghua Xuebao/Acta Automatica Sinica | 2014, Vol. 40, No. 10
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
Face recognition; Animation
DOI
10.3724/SP.J.1004.2014.02245
CLC number
R318.08 [Biomaterials]; Q [Biological Sciences]
Subject classification codes
07 ; 0710 ; 0805 ; 080501 ; 080502 ; 09 ;
Abstract
In this paper, we present a system for real-time performance-driven facial animation. With the system, a user can control the facial expressions of a digital character by acting out the desired facial action in front of an ordinary camera. First, we create a muscle-based 3D face model whose muscle actuation parameters are used to animate the model. To increase the realism of the facial animation, the orbicularis oris in our face model is divided into inner and outer parts, and we establish the relationship between jaw rotation and facial surface deformation. Second, a real-time facial tracking method is employed to track the facial features of a performer in the video. Finally, the tracked facial feature points are used to estimate the muscle actuation parameters that drive the face model. Experimental results show that our system runs in real time and produces realistic facial animations. Compared with most existing performance-based facial animation systems, ours requires no facial markers, intrusive lighting, or special scanning equipment, making it inexpensive and easy to use. Copyright © 2014 Acta Automatica Sinica. All rights reserved.
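The abstract describes mapping tracked facial feature points to muscle actuation parameters that then drive the 3D face model. Below is a minimal illustrative sketch (not the authors' published method) of one way such an estimation could be posed, assuming feature-point displacements depend roughly linearly on the muscle actuations near the neutral pose; all names (estimate_muscle_parameters, B, p0, lam) are hypothetical.

import numpy as np

def estimate_muscle_parameters(p_tracked, p0, B, lam=1e-3):
    # Hypothetical sketch: solve a regularized least-squares problem
    #   minimize ||B a - (p_tracked - p0)||^2 + lam ||a||^2
    # where column k of B holds the feature-point displacement produced by
    # fully actuating muscle k, p0 is the neutral feature-point layout, and
    # p_tracked is the configuration reported by the facial tracker.
    d = (p_tracked - p0).reshape(-1)                 # stacked displacement vector
    K = B.shape[1]
    a = np.linalg.solve(B.T @ B + lam * np.eye(K), B.T @ d)
    return np.clip(a, 0.0, 1.0)                      # keep actuations in [0, 1]

# Toy usage with synthetic data: 10 feature points in 3D, 5 muscles.
rng = np.random.default_rng(0)
B = rng.standard_normal((30, 5))
p0 = rng.standard_normal((10, 3))
a_true = rng.uniform(0.0, 1.0, 5)
p_tracked = p0 + (B @ a_true).reshape(10, 3)
print(estimate_muscle_parameters(p_tracked, p0, B))

The regularization term keeps the fit stable when the tracked points constrain some muscles only weakly; the clipping step reflects the usual convention that actuations lie between relaxed (0) and fully contracted (1).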
Pages: 2245-2252