INCREMENTAL SEMI-SUPERVISED LEARNING FOR MULTI-GENRE SPEECH RECOGNITION

Cited: 0
Authors
Khonglah, Banriskhem [1 ]
Madikeri, Srikanth [1 ]
Dey, Subhadeep [1 ]
Bourlard, Herve [1 ]
Motlicek, Petr [1 ]
Billa, Jayadev [2 ]
Affiliations
[1] Idiap Res Inst, Martigny, Switzerland
[2] Univ Southern Calif, Informat Sci Inst, Los Angeles, CA 90007 USA
Source
2020 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING | 2020
Funding
EU Horizon 2020
Keywords
semi-supervised learning; incremental training; multi-genre speech recognition;
DOI
10.1109/icassp40776.2020.9054309
Chinese Library Classification
O42 [Acoustics]
Subject classification codes
070206; 082403
Abstract
In this work, we explore a data scheduling strategy for semi-supervised learning (SSL) for acoustic modeling in automatic speech recognition. The conventional approach uses a seed model trained on supervised data to automatically recognize the entire set of unlabeled (auxiliary) data, generating new labels for subsequent acoustic model training. In this paper, we propose an approach in which the unlabeled set is divided into multiple equal-sized subsets. These subsets are processed incrementally: in each iteration a new subset is added to the data used for SSL, starting from a single subset in the first iteration. The acoustic model from the previous iteration becomes the seed model for the next one. This scheduling strategy is compared to the approach that employs all unlabeled data in one shot for training. Experiments using lattice-free maximum mutual information (LF-MMI) based acoustic model training on Fisher English yield an 80% word error recovery rate. On multi-genre evaluation sets in Lithuanian and Bulgarian, relative improvements of up to 17.2% in word error rate are observed.
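The scheduling strategy summarized in the abstract can be sketched as a simple loop. This is a minimal illustration, not the paper's implementation: `train` and `decode` are hypothetical stand-ins for real acoustic-model training and decoding (the paper uses LF-MMI training, which this sketch does not implement), so only the subset-scheduling logic is shown.

```python
def split_into_subsets(unlabeled, k):
    """Divide the unlabeled pool into k roughly equal-sized subsets."""
    size = (len(unlabeled) + k - 1) // k  # ceiling division
    return [unlabeled[i:i + size] for i in range(0, len(unlabeled), size)]


def incremental_ssl(seed_model, supervised, unlabeled, k, train, decode):
    """Incremental SSL scheduling: each iteration adds one more subset.

    The model from the previous iteration seeds the next one; the
    auxiliary (pseudo-labeled) pool grows by one subset per iteration.
    """
    model = seed_model
    ssl_pool = []  # pseudo-labeled data accumulated so far
    for subset in split_into_subsets(unlabeled, k):
        # The current model transcribes the newly added subset.
        ssl_pool.extend((utt, decode(model, utt)) for utt in subset)
        # Retrain on supervised data plus all pseudo-labels seen so far;
        # the resulting model becomes the seed for the next iteration.
        model = train(supervised, ssl_pool, init=model)
    return model
```

In contrast, the one-shot baseline would decode the entire unlabeled pool with the initial seed model and train once on the result; the incremental variant lets later subsets be transcribed by progressively stronger models.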
Pages: 7419 - 7423
Page count: 5