The Effect of Real-Time Constraints on Automatic Speech Animation

Cited by: 4
Authors
Websdale, Danny [1 ]
Taylor, Sarah [1 ]
Milner, Ben [1 ]
Affiliations
[1] Univ East Anglia, Norwich, Norfolk, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
Real-time speech animation; automatic lip sync;
DOI
10.21437/Interspeech.2018-2066
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning has previously been applied successfully to speech-driven facial animation. To account for carry-over and anticipatory coarticulation, a common approach is to predict the facial pose from a symmetric window of acoustic speech that includes both past and future context. Relying on future context limits this approach for animating the faces of characters in real-time and networked applications, such as online gaming. An acceptable latency for conversational speech is 200 ms, and network transmission typically consumes a significant part of this budget. Consequently, we consider asymmetric windows, investigating the extent to which decreasing the future context affects the quality of the predicted animation for both deep neural networks (DNNs) and bidirectional long short-term memory recurrent neural networks (BiLSTMs). Specifically, we investigate future contexts from 170 ms (fully symmetric) to 0 ms (fully asymmetric). We find that a BiLSTM trained with 70 ms of future context predicts facial motion of quality equivalent to a DNN trained with 170 ms, while increasing processing time by only 5 ms. Subjective tests using the BiLSTM show that reducing the future context from 170 ms to 50 ms does not significantly decrease perceived realism. Below 50 ms, perceived realism begins to deteriorate, creating a trade-off between realism and latency.
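
As a minimal illustration of the asymmetric-window idea described in the abstract (not the authors' code: the 10 ms frame shift, the 13-dimensional features, and the asymmetric_windows helper are assumptions made for this sketch), the Python snippet below stacks a fixed amount of past context and a configurable amount of future context around each acoustic frame before regression to a facial pose.

import numpy as np

FRAME_SHIFT_MS = 10   # assumed acoustic frame shift
PAST_MS = 170         # past context, kept fixed
FUTURE_MS = 70        # future context under test (0-170 ms in the paper)

def asymmetric_windows(features, past_ms=PAST_MS, future_ms=FUTURE_MS,
                       frame_shift_ms=FRAME_SHIFT_MS):
    # features: (T, D) array of per-frame acoustic features.
    # Returns a (T, (n_past + 1 + n_future) * D) array suitable as DNN input;
    # reshape to (T, n_past + 1 + n_future, D) to feed a (Bi)LSTM instead.
    n_past = past_ms // frame_shift_ms
    n_future = future_ms // frame_shift_ms
    T, D = features.shape
    # Repeat edge frames so every frame has a full context window.
    padded = np.pad(features, ((n_past, n_future), (0, 0)), mode="edge")
    windows = np.stack([padded[t:t + n_past + 1 + n_future] for t in range(T)])
    return windows.reshape(T, -1)

# Example: 200 frames of 13-dimensional features; each windowed vector would be
# regressed to the facial pose of its centre frame. With 0 ms of future context
# the window becomes fully asymmetric and adds no algorithmic look-ahead latency.
X = asymmetric_windows(np.random.randn(200, 13))
print(X.shape)  # (200, 325): (17 past + 1 current + 7 future) frames x 13 dims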
Pages: 2479-2483
Page count: 5