Predicting synthetic voice style from facial expressions. An application for augmented conversations

Cited by: 5
Authors
Szekely, Eva [1 ]
Ahmed, Zeeshan [1 ]
Hennig, Shannon [2 ]
Cabral, Joao P. [1 ]
Carson-Berndsen, Julie [1 ]
Affiliations
[1] Univ Coll Dublin, Sch Comp Sci & Informat, CNGL, Dublin 2, Ireland
[2] Ist Italian Tecnol, Genoa, Italy
Funding
Science Foundation Ireland;
Keywords
Expressive speech synthesis; Facial expressions; Multimodal application; Augmentative and alternative communication; SPEECH; RESPONSES; EMOTIONS;
DOI
10.1016/j.specom.2013.09.003
Chinese Library Classification (CLC)
O42 [Acoustics];
Subject classification codes
070206; 082403;
Abstract
The ability to efficiently facilitate social interaction and emotional expression is an important, yet unmet requirement for speech-generating devices aimed at individuals with speech impairment. Using gestures such as facial expressions to control aspects of expressive synthetic speech could contribute to an improved communication experience for both the user of the device and the conversation partner. For this purpose, a mapping model between facial expressions and speech is needed that is high-level (utterance-based), versatile and personalisable. In the mapping developed in this work, the visual and auditory modalities are connected through the intended emotional salience of a message: the intensity of the user's facial expressions is linked to the emotional intensity of the synthetic speech. The mapping model has been implemented in a system called WinkTalk, which uses estimated facial expression categories and their intensity values to automatically select between three expressive synthetic voices reflecting three degrees of emotional intensity. An evaluation is conducted through an interactive experiment using simulated augmented conversations. The results show that automatic control of synthetic speech through facial expressions is fast, non-intrusive, sufficiently accurate, and helps the user feel more involved in the conversation. It can be concluded that the system has the potential to facilitate a more efficient communication process between user and listener. (C) 2013 Elsevier B.V. All rights reserved.
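
The abstract describes an utterance-level mapping in which an estimated facial expression category and its intensity score select one of three expressive synthetic voices. The sketch below illustrates that selection logic only; the category names, intensity thresholds and voice labels are assumptions chosen for illustration and are not taken from the WinkTalk implementation.

    # Illustrative sketch only: the thresholds, category names, and voice
    # labels are assumed, mirroring the described idea of mapping facial
    # expression intensity to one of three expressive voices.

    from dataclasses import dataclass

    @dataclass
    class FacialExpression:
        """Hypothetical output of a facial expression recogniser."""
        category: str      # e.g. "smile", "frown", "neutral"
        intensity: float   # normalised score, 0.0 (subtle) .. 1.0 (pronounced)

    # Three synthetic voices of increasing emotional intensity.
    VOICES = ("calm", "moderately_expressive", "highly_expressive")

    def select_voice(expr: FacialExpression,
                     low: float = 0.33, high: float = 0.66) -> str:
        """Bucket the expression intensity into one of three voice styles.

        A neutral face falls back to the calm voice; otherwise the intensity
        score is split at two assumed thresholds.
        """
        if expr.category == "neutral" or expr.intensity < low:
            return VOICES[0]
        if expr.intensity < high:
            return VOICES[1]
        return VOICES[2]

    if __name__ == "__main__":
        print(select_voice(FacialExpression("smile", 0.8)))    # highly_expressive
        print(select_voice(FacialExpression("frown", 0.4)))    # moderately_expressive
        print(select_voice(FacialExpression("neutral", 0.9)))  # calm

In the system described by the paper, such a decision would be made once per utterance, so the voice change is applied at the message level rather than continuously during speech.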
Pages: 63-75
Page count: 13