SCALING END-TO-END MODELS FOR LARGE-SCALE MULTILINGUAL ASR

Cited by: 14
Authors
Li, Bo [1 ]
Pang, Ruoming [1 ]
Sainath, Tara N. [1 ]
Gulati, Anmol [1 ]
Zhang, Yu [1 ]
Qin, James [1 ]
Haghani, Parisa [1 ]
Huang, W. Ronny [1 ]
Ma, Min [1 ]
Bai, Junwen [1 ]
Affiliations
[1] Google, Mountain View, CA 94043 USA
Source
2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU) | 2021
Keywords
large-scale; multilingual speech recognition;
DOI
10.1109/ASRU51503.2021.9687871
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Building ASR models across many languages is a challenging multitask learning problem due to large variations and heavily unbalanced data. Existing work has shown positive transfer from high resource to low resource languages. However, degradations on high resource languages are commonly observed due to interference from the heterogeneous multilingual data and reduction in per-language capacity. We conduct a capacity study on a 15-language task, with the amount of data per language varying from 7.6K to 53.5K hours. We adopt GShard [1] to efficiently scale up to 10B parameters. Empirically, we find that (1) scaling the number of model parameters is an effective way to solve the capacity bottleneck - our 500M-param model already outperforms monolingual baselines, and scaling it to 1B and 10B brings further quality gains; (2) larger models are not only more data efficient, but also more efficient in terms of training cost as measured in TPU days - the 1B-param model reaches the same accuracy as the 500M-param model using only 34% of the training time; (3) given a fixed capacity budget, adding depth works better than width and large encoders do better than large decoders; (4) with continued training, large models can be adapted to new languages and domains.
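Finding (3) of the abstract (at a fixed capacity budget, spend parameters on depth rather than width) can be illustrated with a back-of-the-envelope parameter count. The sketch below is not from the paper: it uses a rough Transformer-style approximation of about 12 * d_model^2 parameters per encoder layer (the paper's Conformer layers also include convolution and relative-attention terms), and the layer counts and the 1B budget are hypothetical, chosen only to show two ways of spending the same budget.

```python
# Minimal sketch (not the paper's code): trading depth vs. width under a
# fixed parameter budget. Per-layer parameter count is approximated as
# 12 * d_model^2, a common rough estimate for attention + feed-forward layers.

def approx_params(num_layers: int, d_model: int, per_layer_factor: int = 12) -> int:
    """Rough parameter count for a stack of attention/feed-forward layers."""
    return num_layers * per_layer_factor * d_model * d_model

def width_for_budget(num_layers: int, budget: int, per_layer_factor: int = 12) -> int:
    """Given a layer count, find the d_model that roughly exhausts the budget."""
    return int((budget / (num_layers * per_layer_factor)) ** 0.5)

if __name__ == "__main__":
    budget = 1_000_000_000  # ~1B parameters, as in the paper's mid-sized model
    # Two hypothetical configurations spending the same budget:
    for layers in (48, 24):  # deeper-and-narrower vs. shallower-and-wider
        d_model = width_for_budget(layers, budget)
        total = approx_params(layers, d_model)
        print(f"{layers:2d} layers, d_model={d_model:5d} -> ~{total / 1e9:.2f}B params")
```

Both configurations land near the same 1B total, so any quality difference between them isolates the depth/width trade-off rather than raw capacity, which is the comparison the capacity study makes.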
Pages: 1011-1018
Number of pages: 8
References
43 references in total
  • [1] Abadi M, 2016, PROCEEDINGS OF OSDI'16: 12TH USENIX SYMPOSIUM ON OPERATING SYSTEMS DESIGN AND IMPLEMENTATION, P265
  • [2] Adams Oliver, 2019, ARXIV PREPRINT ARXIV
  • [3] [Anonymous], 2015, LISTEN ATTEND SPELL
  • [4] Arivazhagan Naveen, 2019, arXiv:1907.05019
  • [5] Brown Tom B, 2020, P ADV NEUR INF PROC
  • [6] Caruana R, 1997, Multitask learning, MACHINE LEARNING, V28 (01), P41-75
  • [7] Chuangsuwanich Ekapol, 2016, MULTILINGUAL TECHNIQ
  • [8] Conneau Alexis, 2020, P 58 ANN M ASS COMP, P8440
  • [9] Devlin J, 2019, 2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, P4171
  • [10] Frankel J., 2001, 7 EUR C SPEECH COMM