Improving Automatic Speech Recognition Through Head Pose Driven Visual Grounding

Cited by: 1
Authors
Vosoughi, Soroush [1]
Affiliations
[1] MIT, Media Lab, 75 Amherst St, E14-574K, Cambridge, MA 02139, USA
Source
32nd Annual ACM Conference on Human Factors in Computing Systems (CHI 2014) | 2014
Keywords
visual grounding; language models; automatic speech recognition; head pose estimation; visual attention
DOI
10.1145/2556288.2556957
CLC Classification Code
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
In this paper, we present a multimodal speech recognition system for real-world scene description tasks. Given a visual scene, the system dynamically biases its language model based on the content of the scene and the visual attention of the speaker; visual attention is used to focus on likely objects within the scene. Given a spoken description, the system then uses the visually biased language model to process the speech. Head pose serves as a proxy for the speaker's visual attention. Readily available computer vision algorithms are used to recognize the objects in the scene, and automatic real-time head pose estimation is performed using depth data captured via a Microsoft Kinect. The system was evaluated on multiple participants; overall, incorporating visual information into the speech recognizer substantially improved speech recognition accuracy. The rapidly decreasing cost of 3D sensing technologies such as the Kinect allows systems built on similar principles to be used for many speech recognition tasks where visual information is available.
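To make the language-model biasing described above concrete, the following is a minimal Python sketch of one plausible scheme: a unigram model is re-weighted toward words naming visible objects, with a larger boost for objects under the speaker's estimated gaze. The function name, boost factors, and dictionary representation are illustrative assumptions; the paper does not specify this exact implementation.

    # Hypothetical sketch of visually biasing a unigram language model.
    # Boost factors and data structures are assumptions for illustration,
    # not the paper's actual biasing scheme.

    def bias_language_model(unigram_probs, scene_objects, attended_objects,
                            scene_boost=2.0, attention_boost=4.0):
        """Re-weight word probabilities toward words naming visible objects,
        with a larger boost for objects the speaker's head pose suggests
        they are attending to."""
        biased = {}
        for word, prob in unigram_probs.items():
            if word in attended_objects:
                biased[word] = prob * attention_boost  # under the speaker's gaze
            elif word in scene_objects:
                biased[word] = prob * scene_boost      # visible but not attended
            else:
                biased[word] = prob
        total = sum(biased.values())                   # renormalize to sum to 1
        return {word: p / total for word, p in biased.items()}

    # Example: "mug" is visible and attended, so the recognizer should prefer
    # it over the acoustically similar but absent "bug".
    lm = {"mug": 0.01, "bug": 0.01, "table": 0.02, "the": 0.10}
    print(bias_language_model(lm, {"mug", "table"}, {"mug"}))

Renormalizing keeps the result a proper probability distribution, so it could stand in for the baseline language model in a standard decoder.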
Pages: 3235-3238
Page count: 4