Towards Building ASR Systems for the Next Billion Users

Cited by: 0
Authors
Javed, Tahir [1,2]
Doddapaneni, Sumanth [2,4]
Raman, Abhigyan [2]
Bhogale, Kaushal Santosh [2]
Ramesh, Gowtham [2,4]
Kunchukuttan, Anoop [2,3]
Kumar, Pratyush [2,3]
Khapra, Mitesh M. [1,2,4]
Affiliations
[1] IIT Madras, Madras, Tamil Nadu, India
[2] AI4Bharat, Chennai, Tamil Nadu, India
[3] Microsoft, Redmond, WA, USA
[4] RBCDSAI, Chennai, Tamil Nadu, India
Source
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022
Keywords
RECOGNITION
DOI
Not available
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification
081104; 0812; 0835; 1405
Abstract
Recent methods in speech and language technology pretrain very large models that are then fine-tuned for specific tasks. However, the benefits of such large models are often limited to a few resource-rich languages of the world. In this work, we make multiple contributions towards building ASR systems for low-resource languages from the Indian subcontinent. First, we curate 17,000 hours of raw speech data for 40 Indian languages from a wide variety of domains, including education, news, technology, and finance. Second, using this raw speech data, we pretrain several variants of wav2vec-style models for 40 Indian languages. Third, we analyze the pretrained models and find key features: codebook vectors of similar-sounding phonemes are shared across languages, representations across layers are discriminative of the language family, and attention heads often attend within small local windows. Fourth, we fine-tune this model for downstream ASR for 9 languages and obtain state-of-the-art results on 3 public datasets, including on very low-resource languages such as Sinhala and Nepali. Our work establishes that multilingual pretraining is an effective strategy for building ASR systems for the linguistically diverse speakers of the Indian subcontinent.
Pages: 10813-10821
Page count: 9
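
As an illustration of the recipe the abstract describes (a pretrained wav2vec-style encoder with a CTC head for downstream ASR), here is a minimal inference sketch in Python. It is not the paper's code: the checkpoint below is a public English placeholder, since this record does not name the released Indic models, and the Hugging Face transformers API is assumed purely for illustration.

    import torch
    from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

    # Placeholder checkpoint; the paper's multilingual Indic
    # checkpoints are not named in this record.
    checkpoint = "facebook/wav2vec2-base-960h"
    processor = Wav2Vec2Processor.from_pretrained(checkpoint)
    model = Wav2Vec2ForCTC.from_pretrained(checkpoint)
    model.eval()

    # 16 kHz mono waveform; one second of silence stands in for real audio.
    waveform = torch.zeros(16000)
    inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")

    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)

    # Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
    predicted_ids = torch.argmax(out.logits, dim=-1)
    print(processor.batch_decode(predicted_ids)[0])

    # out.hidden_states holds per-layer representations, the kind of
    # features the paper probes for language-family structure.

Fine-tuning would replace this frozen inference pass with CTC-loss training (Graves et al., 2006) on labeled transcripts in the target language.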