UTTERANCE-LEVEL END-TO-END LANGUAGE IDENTIFICATION USING ATTENTION-BASED CNN-BLSTM

Cited: 0
Authors
Cai, Weicheng [1 ,2 ]
Cai, Danwei [1 ]
Huang, Shen [3 ]
Li, Ming [1 ]
Affiliations
[1] Duke Kunshan Univ, Data Sci Res Ctr, Kunshan, Peoples R China
[2] Sun Yat Sen Univ, Sch Elect & Informat Technol, Guangzhou, Guangdong, Peoples R China
[3] Tencent Res, Beijing, Peoples R China
Source
2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2019
Funding
National Natural Science Foundation of China
Keywords
Language identification; utterance-level; end-to-end; attention; CNN-BLSTM; SPEAKER; MACHINES;
DOI
10.1109/icassp.2019.8682386
Chinese Library Classification
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
In this paper, we present an end-to-end language identification framework, the attention-based Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BLSTM). The model operates at the utterance level, meaning the utterance-level decision can be obtained directly from the output of the neural network. To handle speech utterances of arbitrary and potentially long duration, we combine the CNN-BLSTM model with a self-attentive pooling layer. The front-end CNN-BLSTM module acts as a local pattern extractor for the variable-length inputs, and the self-attentive pooling layer is built on top of it to obtain a fixed-dimensional utterance-level representation. We conducted experiments on the NIST LRE07 closed-set task, and the results show that the proposed attention-based CNN-BLSTM model achieves error reduction comparable to other state-of-the-art utterance-level neural network approaches on the 3-second, 10-second, and 30-second duration tasks.
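The self-attentive pooling step described in the abstract — collapsing a variable number of frame-level CNN-BLSTM outputs into one fixed-dimensional utterance embedding — can be sketched as below. This is a minimal NumPy illustration under an assumed score form e_t = vᵀ tanh(W h_t + b), not the authors' implementation; the names `W`, `b`, `v` and all dimensions are assumptions for illustration.

```python
import numpy as np

def self_attentive_pooling(H, W, b, v):
    """Pool frame features H of shape (T, d) into a single d-dim vector.

    Assumed scoring: e_t = v . tanh(W^T h_t + b), softmax over time,
    then an attention-weighted sum of the frames.
    """
    e = np.tanh(H @ W + b) @ v      # (T,) unnormalized attention scores
    a = np.exp(e - e.max())
    a = a / a.sum()                 # attention weights, sum to 1
    return a @ H                    # (d,) fixed-size utterance embedding

rng = np.random.default_rng(0)
d, da = 8, 4                        # feature dim, attention dim (assumed)
W = rng.standard_normal((d, da))
b = np.zeros(da)
v = rng.standard_normal(da)

# Utterances of different lengths map to the same embedding size.
u_short = self_attentive_pooling(rng.standard_normal((30, d)), W, b, v)
u_long = self_attentive_pooling(rng.standard_normal((300, d)), W, b, v)
print(u_short.shape, u_long.shape)
```

The point of the weighted sum (rather than, say, taking the last BLSTM state) is that the output dimensionality depends only on the feature dimension, never on the utterance length, which is what lets the network make a single utterance-level decision for 3-, 10-, or 30-second inputs alike.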
Pages: 5991-5995
Number of pages: 5