E2E-based Multi-task Learning Approach to Joint Speech and Accent Recognition

Cited by: 15
Authors
Zhang, Jicheng [1 ]
Peng, Yizhou [1 ]
Pham, Van Tung [2 ]
Xu, Haihua [2 ]
Huang, Hao [1 ]
Chng, Eng Siong [2 ]
Affiliations
[1] Xinjiang University, School of Information Science & Engineering, Urumqi, People's Republic of China
[2] Nanyang Technological University, School of Computer Science & Engineering, Singapore, Singapore
Source
INTERSPEECH 2021 | 2021
Funding
National Key Research and Development Program of China;
Keywords
End-to-end; speech recognition; accent recognition; joint training; multi-task learning;
DOI
10.21437/Interspeech.2021-1495
CLC Classification
R36 [Pathology]; R76 [Otorhinolaryngology];
Discipline Codes
100104; 100213;
Abstract
In this paper, we propose a single multi-task learning framework that performs End-to-End (E2E) automatic speech recognition (ASR) and accent recognition (AR) simultaneously. The proposed framework is not only more compact but also yields results comparable to, or even better than, those of standalone systems. Specifically, we found that overall performance is predominantly determined by the ASR task, and that E2E-based ASR pretraining is essential to achieving improved performance, particularly for the AR task. We also conduct several analyses of the proposed method. First, although the objective loss for the AR task is much smaller than that of the ASR task, a smaller weighting factor on the AR task in the joint objective function is necessary to yield better results for each task. Second, we found that sharing only a few encoder layers yields better AR results than sharing the entire encoder. Experimentally, the proposed method produces WER results close to those of the best standalone E2E ASR system, while achieving 7.7% and 4.2% relative improvements in accent recognition on the test set over standalone and single-task-based joint recognition methods, respectively.
Pages: 1519-1523
Number of pages: 5