Speech driven realistic mouth animation based on multi-modal unit selection

Cited by: 6
Authors
Jiang, Dongmei [1]
Ravyse, Ilse [2]
Sahli, Hichem [2]
Verhelst, Werner [2]
Affiliations
[1] Northwestern Polytechnical University, School of Computer Science, Xi'an 710072, People's Republic of China
[2] Vrije Universiteit Brussel, Department of Electronics and Informatics (ETRO), B-1050 Brussels, Belgium
Funding
National Natural Science Foundation of China
Keywords
Mouth animation; Audio-visual diviseme instance selection; Concatenation smoothness; Pronunciation distance; Intensity distance
DOI
10.1007/s12193-009-0015-7
CLC classification number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
This paper presents a novel audio-visual diviseme (viseme pair) instance selection and concatenation method for speech-driven photo-realistic mouth animation. First, an audio-visual diviseme database is built, containing the audio feature sequences, intensity sequences, and visual feature sequences of the instances. In the Viterbi-based diviseme instance selection, the cumulative cost is the weighted sum of three terms: 1) the logarithm of the concatenation smoothness of the synthesized mouth trajectory; 2) the logarithm of the pronunciation distance; and 3) the logarithm of the audio intensity distance between the candidate diviseme instance and the target diviseme segment in the incoming speech. The selected diviseme instances are time-warped and blended to construct the mouth animation. Objective and subjective evaluations of the synthesized mouth animations show that the proposed multimodal diviseme instance selection algorithm outperforms the triphone unit selection algorithm in Video Rewrite. Clear, accurate, and smooth mouth animations are obtained that match the pronunciation and intensity changes in the incoming speech well. Moreover, the logarithm function in the cumulative cost makes it easy to set weights that yield optimal mouth animations.
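The abstract describes a cumulative cost of the form C = Σ_t (w_s·log S_t + w_p·log P_t + w_i·log I_t), where S_t, P_t, and I_t are the smoothness, pronunciation, and intensity distances at segment t, minimized by a Viterbi dynamic program over candidate instances. The sketch below is a minimal illustration of that selection step, not the authors' implementation: the `Diviseme` container, the fixed-length pooled audio/intensity vectors, the Euclidean distances, and the weights `w_smooth`, `w_pron`, `w_int` are all assumptions (the paper compares variable-length sequences via time warping, which this sketch omits).

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Diviseme:
    # Hypothetical container; field names and shapes are assumptions.
    audio: np.ndarray      # pooled audio feature vector
    intensity: np.ndarray  # pooled intensity vector
    visual: np.ndarray     # visual feature sequence, shape (frames, dims)

def viterbi_select(targets, candidates, w_smooth=1.0, w_pron=1.0, w_int=1.0):
    """Pick one database instance per target diviseme segment by
    minimizing the accumulated weighted sum of log-distances."""
    eps = 1e-8  # guard against log(0)

    def target_cost(cand, tgt):
        # Pronunciation and intensity distances to the target segment.
        pron = np.linalg.norm(cand.audio - tgt.audio) + eps
        inten = np.linalg.norm(cand.intensity - tgt.intensity) + eps
        return w_pron * np.log(pron) + w_int * np.log(inten)

    T = len(targets)
    # cost[t][j]: best accumulated cost ending in candidate j at step t
    cost = [np.full(len(candidates[t]), np.inf) for t in range(T)]
    back = [np.zeros(len(candidates[t]), dtype=int) for t in range(T)]

    for j, c in enumerate(candidates[0]):
        cost[0][j] = target_cost(c, targets[0])

    for t in range(1, T):
        for j, c in enumerate(candidates[t]):
            tc = target_cost(c, targets[t])
            for i, p in enumerate(candidates[t - 1]):
                # Concatenation smoothness: visual-feature jump from the
                # end of the previous instance to the start of this one.
                smooth = np.linalg.norm(c.visual[0] - p.visual[-1]) + eps
                total = cost[t - 1][i] + w_smooth * np.log(smooth) + tc
                if total < cost[t][j]:
                    cost[t][j] = total
                    back[t][j] = i

    # Backtrack the lowest-cost path of instance indices.
    path = [int(np.argmin(cost[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return list(reversed(path))
```

Because each term enters through a logarithm, the weights rescale ratios of distances rather than their absolute magnitudes, which is what makes the weight setting insensitive to the differing ranges of the three distance measures.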
Pages: 157-169
Page count: 13