A comprehensive system for facial animation of generic 3D head models driven by speech

Cited by: 0
Authors
Lucas D Terissi
Mauricio Cerda
Juan C Gómez
Nancy Hitschfeld-Kahler
Bernard Girau
Affiliations
[1] Laboratory for System Dynamics & Signal Processing, Universidad Nacional de Rosario and CIFASIS
[2] SCIAN-Lab, Faculty of Medicine, Universidad de Chile
[3] Computer Science Department, FCFyM, Universidad de Chile
[4] Cortex Team, Loria - INRIA Nancy Grand Est
Source
EURASIP Journal on Audio, Speech, and Music Processing | Vol. 2013
Keywords
Facial animation; Hidden Markov models; Audio-visual speech processing
DOI: not available
Abstract
A comprehensive system for facial animation of generic 3D head models driven by speech is presented in this article. In the training stage, audio-visual features are extracted from audio-visual training data and used to compute the parameters of a single joint audio-visual hidden Markov model (AV-HMM). In contrast to most methods in the literature, the proposed approach requires no segmentation/classification processing stages of the audio-visual data, avoiding the error propagation associated with these procedures. The trained AV-HMM provides a compact representation of the audio-visual data without the need for phoneme (or word) segmentation, which makes it adaptable to different languages. Visual features are estimated from the speech signal by inversion of the AV-HMM. The estimated visual speech features are used to animate a simple face model. The animation of a more complex head model is then obtained by automatically mapping the deformation of the simple model onto it, using a small number of control points for the interpolation. The proposed algorithm allows the animation of 3D head models of arbitrary complexity through a simple setup procedure. The resulting animation is evaluated in terms of intelligibility of visual speech through perceptual tests, showing promising performance. The computational complexity of the proposed system is analyzed, demonstrating the feasibility of its real-time implementation.
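The final stage described in the abstract, transferring the simple model's deformation to an arbitrary head mesh through a small number of control points, is an interpolation problem. A minimal sketch using Gaussian radial basis functions illustrates the idea; the kernel choice, function names, and point sets below are illustrative assumptions, not the authors' exact scheme:

```python
import numpy as np

def rbf_weights(control_pts, displacements, sigma=1.0):
    """Fit RBF weights so the interpolant reproduces the control-point displacements."""
    # Gaussian kernel matrix between all pairs of control points
    d2 = ((control_pts[:, None, :] - control_pts[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    # Solve K @ W = displacements for the per-control-point weights
    return np.linalg.solve(K, displacements)

def map_deformation(vertices, control_pts, weights, sigma=1.0):
    """Propagate the control-point deformation to every vertex of the complex mesh."""
    d2 = ((vertices[:, None, :] - control_pts[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return vertices + K @ weights
```

By construction the interpolant passes exactly through the control points, so the complex mesh follows the simple model wherever the two coincide, while vertices in between are deformed smoothly.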