Cross-corpora spoken language identification with domain diversification and generalization

Cited: 9
Authors
Dey, Spandan [1 ]
Sahidullah, Md [2 ]
Saha, Goutam [1 ]
Affiliations
[1] Indian Inst Technol Kharagpur, Dept Elect & Elect Commun Engn, Kharagpur 721302, India
[2] Univ Lorraine, CNRS, Inria, LORIA, F-54000 Nancy, France
Keywords
Language identification (LID); Cross-corpora evaluation; Audio augmentation; Domain generalization; Domain adversarial training; Multitask learning; NEURAL-NETWORKS; RECOGNITION; ADAPTATION; INFORMATION; VOICE;
DOI
10.1016/j.csl.2023.101489
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
This work addresses the cross-corpora generalization issue for the low-resourced spoken language identification (LID) problem. We have conducted the experiments in the context of Indian LID and identified strikingly poor cross-corpora generalization due to corpora-dependent non-lingual biases. The contribution of this work is twofold. First, we propose domain diversification, which diversifies the limited training data using different audio data augmentation methods. We then propose the concept of maximally diversity-aware cascaded augmentations and optimize the augmentation fold-factor for effective diversification of the training data. Second, we introduce the idea of domain generalization, treating the augmentation methods as pseudo-domains. Towards this, we investigate both domain-invariant and domain-aware approaches. Our LID system is based on the state-of-the-art emphasized channel attention, propagation, and aggregation based time delay neural network (ECAPA-TDNN) architecture. We have conducted extensive experiments with three widely used corpora for Indian LID research. In addition, we conduct a final blind evaluation of our proposed methods on the Indian subset of the VoxLingua107 corpus collected in the wild. Our experiments demonstrate that the proposed domain diversification is more promising than commonly used simple augmentation methods. The study also reveals that domain generalization is a more effective solution than domain diversification. We also notice that domain-aware learning performs better for same-corpora LID, whereas domain-invariant learning is more suitable for cross-corpora generalization. Compared to the basic ECAPA-TDNN, the proposed domain-invariant extensions improve the cross-corpora EER by up to 5.23%. In addition, the proposed domain-aware extensions also improve performance for same-corpora test scenarios.
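The abstract's core idea of domain diversification — producing pseudo-domain copies of limited training audio via single and cascaded augmentations, with the number of copies controlled by a fold-factor — can be sketched as below. Note this is a minimal toy illustration, not the authors' implementation: `add_noise`, `speed_perturb`, and `diversify` are hypothetical stand-ins for real augmentation front-ends.

```python
import random

random.seed(0)

def add_noise(wave, snr_db=15.0):
    """Additive Gaussian noise at a (simplified) target SNR."""
    power = sum(x * x for x in wave) / len(wave)
    sigma = (power / (10 ** (snr_db / 10))) ** 0.5
    return [x + random.gauss(0.0, sigma) for x in wave]

def speed_perturb(wave, factor=1.1):
    """Speed perturbation via naive linear-interpolation resampling."""
    n_out = int(len(wave) / factor)
    out = []
    for i in range(n_out):
        pos = i * factor
        lo = int(pos)
        hi = min(lo + 1, len(wave) - 1)
        frac = pos - lo
        out.append(wave[lo] * (1 - frac) + wave[hi] * frac)
    return out

def diversify(wave, chains):
    """Apply each augmentation chain to yield one pseudo-domain copy.

    Returns the original plus one copy per chain, so the effective
    fold-factor is len(chains) + 1.
    """
    copies = [wave]
    for chain in chains:
        aug = wave
        for fn in chain:
            aug = fn(aug)
        copies.append(aug)
    return copies

wave = [0.1 * ((i % 20) - 10) for i in range(400)]  # toy waveform
chains = [
    [add_noise],                 # single augmentation
    [speed_perturb, add_noise],  # cascaded augmentation
]
copies = diversify(wave, chains)
print(len(copies))  # fold-factor of 3: original + 2 pseudo-domain copies
```

In the paper's domain-generalization framing, each chain's output would be tagged with its pseudo-domain label, which a domain-invariant objective (e.g. domain adversarial training) or a domain-aware objective (e.g. multitask learning) then consumes alongside the language label.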
Pages: 24