SCALING END-TO-END MODELS FOR LARGE-SCALE MULTILINGUAL ASR

Cited by: 14
Authors
Li, Bo [1 ]
Pang, Ruoming [1 ]
Sainath, Tara N. [1 ]
Gulati, Anmol [1 ]
Zhang, Yu [1 ]
Qin, James [1 ]
Haghani, Parisa [1 ]
Huang, W. Ronny [1 ]
Ma, Min [1 ]
Bai, Junwen [1 ]
Affiliations
[1] Google, Mountain View, CA 94043 USA
Source
2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU) | 2021
Keywords
large-scale; multilingual speech recognition;
DOI
10.1109/ASRU51503.2021.9687871
CLC number (Chinese Library Classification)
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Building ASR models across many languages is a challenging multitask learning problem due to large cross-lingual variation and heavily unbalanced data. Existing work has shown positive transfer from high-resource to low-resource languages. However, degradation on high-resource languages is commonly observed, due to interference from the heterogeneous multilingual data and the reduction in per-language capacity. We conduct a capacity study on a 15-language task, with the amount of data per language varying from 7.6K to 53.5K hours. We adopt GShard [1] to efficiently scale up to 10B parameters. Empirically, we find that (1) scaling the number of model parameters is an effective way to solve the capacity bottleneck - our 500M-param model already outperforms monolingual baselines, and scaling it to 1B and 10B brings further quality gains; (2) larger models are not only more data efficient but also more efficient in terms of training cost as measured in TPU days - the 1B-param model reaches the same accuracy in 34% of the training time of the 500M-param model; (3) given a fixed capacity budget, adding depth works better than adding width, and large encoders do better than large decoders; (4) with continued training, large models can be adapted to new languages and domains.
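Finding (3) compares ways of spending a fixed parameter budget. The trade-off can be illustrated with back-of-the-envelope arithmetic (an illustrative sketch, not the paper's code: the 12·d² per-layer cost and the specific layer counts and widths below are assumptions, using the standard Transformer estimate of 4·d² for self-attention projections plus 8·d² for a 4x feed-forward block):

```python
# Illustrative sketch: two encoder configurations with the SAME total
# parameter budget but different depth/width allocations.
# Assumption: one Transformer-style layer costs ~12 * d_model^2 params
# (4*d^2 attention projections + 8*d^2 for a 4x FFN); embeddings and
# the decoder are ignored for simplicity.

def layer_params(d_model: int) -> int:
    """Approximate parameter count of one encoder layer."""
    return 12 * d_model * d_model

def encoder_params(num_layers: int, d_model: int) -> int:
    """Approximate parameter count of the whole encoder stack."""
    return num_layers * layer_params(d_model)

# Two hypothetical ~1B-param encoders (values chosen for illustration):
deep_narrow = encoder_params(num_layers=48, d_model=1328)   # spend on depth
wide_shallow = encoder_params(num_layers=12, d_model=2656)  # spend on width

print(f"deep/narrow : {deep_narrow / 1e9:.2f}B params")
print(f"wide/shallow: {wide_shallow / 1e9:.2f}B params")
```

Halving the width frees enough budget for four times the depth (parameters scale quadratically in d_model but only linearly in layer count), which is why depth-vs-width is a meaningful choice at a fixed capacity.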
Pages: 1011 - 1018
Page count: 8
References
43 entries in total
  • [11] Graves Alex, 2012, CoRR
  • [12] Gulati, Anmol; Qin, James; Chiu, Chung-Cheng; Parmar, Niki; Zhang, Yu; Yu, Jiahui; Han, Wei; Wang, Shibo; Zhang, Zhengdong; Wu, Yonghui; Pang, Ruoming. Conformer: Convolution-augmented Transformer for Speech Recognition. [J]. INTERSPEECH 2020, 2020: 5036 - 5040
  • [13] Han Wei, 2020, P INTERSPEECH
  • [14] He YZ, 2019, INT CONF ACOUST SPEE, P6381, DOI 10.1109/ICASSP.2019.8682336
  • [15] Hieronymus James L, 1993, JIPA, V23, P72
  • [16] Hinton, Geoffrey; Deng, Li; Yu, Dong; Dahl, George E.; Mohamed, Abdel-rahman; Jaitly, Navdeep; Senior, Andrew; Vanhoucke, Vincent; Nguyen, Patrick; Sainath, Tara N.; Kingsbury, Brian. Deep Neural Networks for Acoustic Modeling in Speech Recognition. [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2012, 29 (06): 82 - 97
  • [17] Hou, Wenxin; Dong, Yue; Zhuang, Bairong; Yang, Longfei; Shi, Jiatong; Shinozaki, Takahiro. Large-Scale End-to-End Multilingual Speech Recognition and Language Identification with Multi-Task Learning. [J]. INTERSPEECH 2020, 2020: 1037 - 1041
  • [18] International Phonetic Association, 1999, HDB INT PHON ASS GUI
  • [19] Kannan Anjuli, 2019, P INTERSPEECH
  • [20] Kaplan Jared, 2020, Scaling Laws for Neural Language Models