IndicSUPERB: A Speech Processing Universal Performance Benchmark for Indian languages

Cited by: 0
Authors
Javed, Tahir [1 ,2 ]
Bhogale, Kaushal [1 ,2 ]
Raman, Abhigyan [2 ]
Kumar, Pratyush [2 ,3 ]
Kunchukuttan, Anoop [2 ,3 ]
Khapra, Mitesh M. [1 ,2 ]
Affiliations
[1] Indian Inst Technol Madras, Chennai, Tamil Nadu, India
[2] AI4Bharat, Chennai, Tamil Nadu, India
[3] Microsoft, Redmond, WA USA
Source
THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 11 | 2023
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A cornerstone in AI research has been the creation and adoption of standardized training and test datasets to earmark the progress of state-of-the-art models. A particularly successful example is the GLUE dataset for training and evaluating Natural Language Understanding (NLU) models for English. The large body of research around self-supervised BERT-based language models revolved around performance improvements on NLU tasks in GLUE. To evaluate language models in other languages, several language-specific GLUE datasets were created. The area of speech language understanding (SLU) has followed a similar trajectory. The success of large self-supervised models such as wav2vec2 enables the creation of speech models with relatively easy-to-access unlabelled data. These models can then be evaluated on SLU tasks, such as the SUPERB benchmark. In this work, we extend this to Indic languages by releasing the IndicSUPERB benchmark. Specifically, we make the following three contributions. (i) We collect Kathbath, containing 1,684 hours of labelled speech data across 12 Indian languages from 1,218 contributors located in 203 districts in India. (ii) Using Kathbath, we create benchmarks across 6 speech tasks: Automatic Speech Recognition, Speaker Verification, Speaker Identification (mono/multi), Language Identification, Query By Example, and Keyword Spotting for 12 languages. (iii) On the released benchmarks, we train and evaluate different self-supervised models alongside a commonly used baseline, FBANK. We show that language-specific fine-tuned models are more accurate than the baseline on most of the tasks, including a large gap of 76% for the Language Identification task. However, for speaker identification, self-supervised models trained on large datasets demonstrate an advantage. We hope IndicSUPERB contributes to the progress of developing speech language understanding models for Indian languages.
Pages: 12942 - 12950 (9 pages)