Real-time speech-driven face animation with expressions using neural networks

Cited by: 70
Authors
Hong, PY [1]
Wen, Z [1]
Huang, TS [1]
Affiliation
[1] Univ Illinois, Beckman Inst Adv Sci & Technol, Urbana, IL 61801 USA
Source
IEEE TRANSACTIONS ON NEURAL NETWORKS | 2002, Vol. 13, No. 4
Funding
US National Science Foundation
Keywords
facial deformation modeling; facial motion analysis and synthesis; neural networks; real-time speech-driven; talking face with expressions
DOI
10.1109/TNN.2002.1021892
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
A real-time speech-driven synthetic talking face provides an effective multimodal communication interface in distributed collaboration environments. Nonverbal gestures such as facial expressions are important to human communication and should be considered by speech-driven face animation systems. In this paper, we present a framework that systematically addresses facial deformation modeling, automatic facial motion analysis, and real-time speech-driven face animation with expressions using neural networks. Based on this framework, we learn a quantitative visual representation of the facial deformations, called the motion units (MUs). Any facial deformation can be approximated by a linear combination of the MUs weighted by MU parameters (MUPs). We develop an MU-based facial motion tracking algorithm, which is used to collect an audio-visual training database. We then construct a real-time audio-to-MUP mapping by training a set of neural networks on the collected audio-visual database. Quantitative evaluation of the mapping shows the effectiveness of the proposed approach. Using the proposed method, we add real-time speech-driven face animation with expressions to the iFACE system. Experimental results show that the synthetic expressive talking face of the iFACE system is comparable with a real face in terms of its influence on bimodal human emotion perception.
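The abstract's core representation is that any facial deformation is approximated as a linear combination of motion units (MUs) weighted by MU parameters (MUPs). A minimal sketch of that idea, where the dimensions, array names, and random data are all illustrative assumptions rather than the paper's actual learned model:

```python
import numpy as np

# Hypothetical dimensions: a face mesh with V vertices in 3-D, K motion units.
V, K = 500, 7

rng = np.random.default_rng(0)
neutral = rng.standard_normal((V, 3))   # neutral face shape (placeholder data)
mus = rng.standard_normal((K, V, 3))    # motion units: basis deformations
mups = rng.standard_normal(K)           # motion unit parameters (weights)

# Deformed face = neutral shape + sum_k mups[k] * mus[k]
deformed = neutral + np.tensordot(mups, mus, axes=1)
print(deformed.shape)  # (500, 3)
```

In the paper's pipeline, the MUPs on the right-hand side are what the trained neural networks predict from audio features in real time; here they are just random placeholders.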
Pages: 916-927
Page count: 12