Improved Speech Reconstruction from Silent Video

Cited by: 53
Authors
Ephrat, Ariel [1 ]
Halperin, Tavi [1 ]
Peleg, Shmuel [1 ]
Affiliation
[1] Hebrew Univ Jerusalem, Jerusalem, Israel
Source
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS (ICCVW 2017) | 2017
Funding
Israel Science Foundation
Keywords
NETWORKS
DOI
10.1109/ICCVW.2017.61
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Speechreading is the task of inferring phonetic information from visually observed articulatory facial movements, and is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible and natural-sounding acoustic speech signal from silent video frames of a speaking person. We train our model on speakers from the GRID and TCD-TIMIT datasets, and evaluate the quality and intelligibility of reconstructed speech using common objective measurements. We show that speech predictions from the proposed model attain scores which indicate significantly improved quality over existing models. In addition, we show promising results towards reconstructing speech from an unconstrained dictionary.
Pages: 455-462 (8 pages)