Multimodal analysis of speech and arm motion for prosody-driven synthesis of beat gestures

Cited by: 19
Authors
Bozkurt, Elif [1]
Yemez, Yucel [1]
Erzin, Engin [1]
Affiliations
[1] Koc Univ, Multimedia Vis & Graph Lab, Coll Engn, TR-34450 Istanbul, Turkey
Keywords
Joint analysis of speech and gesture; Speech-driven gesture animation; Prosody-driven gesture synthesis; Speech rhythm; Unit selection; Hidden semi-Markov models; UTTERANCES
DOI
10.1016/j.specom.2016.10.004
Chinese Library Classification (CLC)
O42 [Acoustics]
Discipline classification codes
070206; 082403
Abstract
We propose a framework for the joint analysis of speech prosody and arm motion, aimed at automatic synthesis and realistic animation of beat gestures from speech prosody and rhythm. In the analysis stage, we first segment motion capture data and speech audio into gesture phrases and prosodic units via temporal clustering, and assign a class label to each resulting gesture phrase and prosodic unit. We then train a discrete hidden semi-Markov model (HSMM) over the segmented data, in which gesture labels are hidden states with duration statistics and frame-level prosody labels are observations. The HSMM structure lets us effectively map sequences of shorter-duration prosodic units to longer-duration gesture phrases. Also in the analysis stage, we construct a gesture pool of gesture phrases segmented from the available dataset, where each gesture phrase is associated with a class label and a speech rhythm representation. In the synthesis stage, we use a modified Viterbi algorithm with a duration model that, given a sequence of prosody labels, decodes the optimal gesture label sequence with duration information over the HSMM. In the animation stage, the synthesized gesture label sequence, together with its duration and speech rhythm information, is mapped to a motion sequence by a multiple-objective unit selection algorithm. Our framework is tested on two multimodal datasets in speaker-dependent and speaker-independent settings. The resulting motion sequence, when accompanied by the speech input, yields natural-looking and plausible animations. We use objective evaluations to set the parameters of the proposed prosody-driven gesture animation system, and subjective evaluations to assess the quality of the resulting animations. The subjective evaluations show that the difference between the proposed HSMM-based synthesis and the motion-capture synthesis is not statistically significant. Furthermore, the proposed HSMM-based synthesis is rated significantly better than a baseline synthesis that animates random gestures based only on joint angle continuity. (C) 2016 Elsevier B.V. All rights reserved.
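For concreteness, the following is a minimal sketch of Viterbi decoding over a discrete HSMM with an explicit duration model, the kind of decoder the synthesis stage describes: gesture labels act as hidden states with duration distributions, and frame-level prosody labels are the observations. This is an illustrative reconstruction, not the authors' implementation; the paper's "modified" Viterbi variant is not detailed in the abstract, and all names and the discrete parameterization (hsmm_viterbi, log_pi, log_A, log_B, log_D) are assumptions.

    import numpy as np

    # Illustrative sketch of duration-explicit (HSMM) Viterbi decoding;
    # variable names and the parameterization are assumptions, not taken
    # from the paper.
    def hsmm_viterbi(obs, log_pi, log_A, log_B, log_D):
        """Decode the most likely gesture-label segmentation of a
        frame-level prosody label sequence under a discrete HSMM.

        obs    : (T,) int array of frame-level prosody labels
        log_pi : (K,) log initial gesture-state probabilities
        log_A  : (K, K) log transition probabilities (no self-transitions)
        log_B  : (K, V) log emission probabilities over prosody labels
        log_D  : (K, Dmax) log duration probabilities; log_D[g, d-1] = log P(d | g)
        Returns a list of (gesture_label, start_frame, duration) segments.
        """
        T = len(obs)
        K, Dmax = log_D.shape
        # Cumulative per-state emission log-likelihoods: a segment's
        # observation score becomes a difference of two cumulative sums.
        cum = np.zeros((K, T + 1))
        cum[:, 1:] = np.cumsum(log_B[:, obs], axis=1)

        delta = np.full((T + 1, K), -np.inf)            # best score of a segment ending at t in state g
        best_prev = np.full((T + 1, K), -1, dtype=int)  # predecessor gesture state
        best_dur = np.zeros((T + 1, K), dtype=int)      # duration of the winning segment

        for t in range(1, T + 1):
            for g in range(K):
                for d in range(1, min(Dmax, t) + 1):
                    emit = cum[g, t] - cum[g, t - d]  # emissions of frames t-d .. t-1
                    if t - d == 0:                    # segment opens the sequence
                        score = log_pi[g] + log_D[g, d - 1] + emit
                        prev = -1
                    else:                             # best predecessor segment
                        cand = delta[t - d] + log_A[:, g]
                        prev = int(np.argmax(cand))
                        score = cand[prev] + log_D[g, d - 1] + emit
                    if score > delta[t, g]:
                        delta[t, g] = score
                        best_prev[t, g] = prev
                        best_dur[t, g] = d

        # Backtrack from the best final state to recover labeled segments.
        segments = []
        g, t = int(np.argmax(delta[T])), T
        while t > 0:
            d = best_dur[t, g]
            segments.append((g, t - d, d))
            g, t = best_prev[t, g], t - d
        return segments[::-1]

The cumulative sums make each candidate segment's observation score an O(1) lookup, so this naive decode runs in O(T * K^2 * Dmax) time; the decoder returns gesture segments with explicit durations, which is the form of output a unit-selection animation stage would consume.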
Pages: 29-42
Page count: 14