Speech-Driven Facial Animations Improve Speech-in-Noise Comprehension of Humans

Cited by: 7
Authors
Varano, Enrico [1,2]
Vougioukas, Konstantinos [3]
Ma, Pingchuan [3]
Petridis, Stavros [3]
Pantic, Maja [3]
Reichenbach, Tobias [4]
Affiliations
[1] Imperial Coll London, Dept Bioengn, London, England
[2] Imperial Coll London, Ctr Neurotechnol, London, England
[3] Imperial Coll London, Dept Comp, London, England
[4] Friedrich Alexander Univ Erlangen Nurnberg, Dept Artificial Intelligence Biomed Engn, Erlangen, Germany
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
speech perception; audiovisual integration; speech in noise; facial animation; generative adversarial network (GAN); INVERSE EFFECTIVENESS; INTEGRATION;
DOI
10.3389/fnins.2021.781196
CLC classification number
Q189 [Neuroscience];
Discipline classification code
071006;
Abstract
Understanding speech becomes a demanding task when the environment is noisy. Comprehension of speech in noise can be substantially improved by looking at the speaker's face, and this audiovisual benefit is even more pronounced in people with hearing impairment. Recent advances in AI have made it possible to synthesize photorealistic talking faces from a speech recording and a still image of a person's face in an end-to-end manner. However, it has remained unknown whether such facial animations improve speech-in-noise comprehension. Here we consider facial animations produced by a recently introduced generative adversarial network (GAN), and show that humans cannot distinguish the synthesized videos from natural ones. Importantly, we then show that the end-to-end synthesized videos significantly aid humans in understanding speech in noise, although natural facial motions yield an even larger audiovisual benefit. We further find that an audiovisual speech recognizer (AVSR) benefits from the synthesized facial animations as well. Our results suggest that synthesizing facial motions from speech can be used to aid speech comprehension in difficult listening environments.
Pages: 9