Speech driven realistic mouth animation based on multi-modal unit selection

Cited by: 6
Authors
Jiang, Dongmei [1 ]
Ravyse, Ilse [2 ]
Sahli, Hichem [2 ]
Verhelst, Werner [2 ]
Affiliations
[1] Northwestern Polytech Univ, Sch Comp Sci, Xian 710072, Peoples R China
[2] Vrije Univ Brussel, Dept ETRO, B-1050 Brussels, Belgium
Funding
National Natural Science Foundation of China;
Keywords
Mouth animation; Audio visual diviseme instance selection; Concatenation smoothness; Pronunciation distance; Intensity distance;
DOI
10.1007/s12193-009-0015-7
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper presents a novel audio-visual diviseme (viseme pair) instance selection and concatenation method for speech-driven photo-realistic mouth animation. First, an audio-visual diviseme database is built, consisting of the audio feature sequences, intensity sequences, and visual feature sequences of the instances. In the Viterbi-based diviseme instance selection, the accumulative cost is set as the weighted sum of three terms: 1) the logarithm of the concatenation smoothness of the synthesized mouth trajectory; 2) the logarithm of the pronunciation distance; and 3) the logarithm of the audio intensity distance between the candidate diviseme instance and the target diviseme segment in the incoming speech. The selected diviseme instances are time-warped and blended to construct the mouth animation. Objective and subjective evaluations of the synthesized mouth animations show that the multimodal diviseme instance selection algorithm proposed in this paper outperforms the triphone unit selection algorithm in Video Rewrite: clear, accurate, smooth mouth animations are obtained that match the pronunciation and intensity changes in the incoming speech well. Moreover, with the logarithm function in the accumulative cost, the weights can easily be set to obtain optimal mouth animations.
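The accumulative cost described in the abstract can be sketched as a standard Viterbi search over candidate diviseme instances. The sketch below is illustrative only: the three distance functions (`smooth_dist`, `pron_dist`, `intensity_dist`), the feature representations, and the weights are hypothetical placeholders, not the paper's actual features or parameters.

```python
import math

EPS = 1e-9  # guard against log(0)

def pron_dist(cand, target):
    # toy stand-in: squared Euclidean distance between audio feature vectors
    return sum((a - b) ** 2 for a, b in zip(cand["audio"], target["audio"]))

def intensity_dist(cand, target):
    # toy stand-in: absolute difference of average audio intensity
    return abs(cand["intensity"] - target["intensity"])

def smooth_dist(prev, cand):
    # toy stand-in: visual discontinuity at the concatenation point
    return sum((a - b) ** 2 for a, b in zip(prev["v_end"], cand["v_start"]))

def viterbi_select(targets, candidates, w_smooth=1.0, w_pron=1.0, w_int=1.0):
    """Pick one candidate instance per target segment by minimizing the
    accumulative cost: a weighted sum of log smoothness, log pronunciation
    distance, and log intensity distance (as sketched from the abstract)."""
    n = len(targets)
    best = [dict() for _ in range(n)]  # best[t][j]: min cost ending at candidate j
    back = [dict() for _ in range(n)]  # back-pointers for path recovery
    for j, c in enumerate(candidates[0]):
        best[0][j] = (w_pron * math.log(pron_dist(c, targets[0]) + EPS)
                      + w_int * math.log(intensity_dist(c, targets[0]) + EPS))
    for t in range(1, n):
        for j, c in enumerate(candidates[t]):
            local = (w_pron * math.log(pron_dist(c, targets[t]) + EPS)
                     + w_int * math.log(intensity_dist(c, targets[t]) + EPS))
            # the smoothness term couples consecutive selections
            costs = {i: best[t - 1][i] + local
                        + w_smooth * math.log(smooth_dist(candidates[t - 1][i], c) + EPS)
                     for i in best[t - 1]}
            back[t][j], best[t][j] = min(costs.items(), key=lambda kv: kv[1])
    # backtrack the minimum-cost path
    j = min(best[n - 1], key=best[n - 1].get)
    path = [j]
    for t in range(n - 1, 0, -1):
        j = back[t][j]
        path.append(j)
    return list(reversed(path))
```

Because each term enters through a logarithm, an exact match on any one distance drives its term strongly negative, which is one way to read the abstract's claim that the log form makes the weights easy to balance.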
Pages: 157-169
Page count: 13
Related papers
30 items in total
  • [1] Aleksic PS, 2003, 2003 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOL 3, PROCEEDINGS, P1
  • [2] Bregler C., 1997, Computer Graphics Proceedings, SIGGRAPH 97, P353, DOI 10.1145/258734.258880
  • [3] CAO Y., 2004, P ACM SIGGRAPH EUR S, P347
  • [4] Choi KH, Luo Y, Hwang JN, 2001, Hidden Markov model inversion for audio-to-visual conversion in an MPEG-4 facial animation system, JOURNAL OF VLSI SIGNAL PROCESSING SYSTEMS FOR SIGNAL IMAGE AND VIDEO TECHNOLOGY, V29, P51
  • [5] Cosatto E, Graf HP, 1998, Sample-based synthesis of photo-realistic talking heads, COMPUTER ANIMATION 98 - PROCEEDINGS, P103
  • [6] Cosatto E, 2000, 2000 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, PROCEEDINGS VOLS I-III, P619, DOI 10.1109/ICME.2000.871439
  • [7] Cosker D, Marshall D, Rosin PL, Hicks Y, 2004, Speech driven facial animation using a hidden Markov coarticulation model, PROCEEDINGS OF THE 17TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 1, P128
  • [8] Deng Z, 2006, IEEE T VIS COMPUT GR, V12, P1
  • [9] Dongmei Jiang, 2008, 2008 IEEE 10th Workshop on Multimedia Signal Processing (MMSP), P906, DOI 10.1109/MMSP.2008.4665203
  • [10] Ezzat T, Poggio T, 2000, Visual speech synthesis by morphing visemes, INTERNATIONAL JOURNAL OF COMPUTER VISION, V38, P45