BSML: Bidirectional Sampling Aggregation-based Metric Learning for Low-resource Uyghur Few-shot Speaker Verification

Cited by: 1
Authors
Zi, Yunfei [1 ]
Xiong, Shengwu [1 ]
Affiliations
[1] Wuhan Univ Technol, Sch Comp & Artificial Intelligence, 122 Luoshi Rd, Wuhan 430070, Hubei, Peoples R China
Keywords
Uyghur; bidirectional sampling; metric learning; few-shot; speaker verification; low-resource language; limited data; recognition
DOI
10.1145/3564782
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Text-independent speaker verification remains an active research topic, especially when enrollment and/or test data are limited. At the same time, the lack of sufficient training data makes low-resource few-shot speaker verification models prone to overfitting and low recognition accuracy. Therefore, a bidirectional sampling aggregation-based meta-metric learning method, termed bidirectional sampling multi-scale Fisher feature fusion (BSML), is proposed to address the low accuracy of speaker recognition in low-resource environments with limited data. First, BSML performs effective feature enhancement in the feature extraction stage; second, a large number of similar but disjoint tasks are used to train the model to learn how to compare sample similarity; finally, new tasks identify unknown samples by computing the similarity between samples. Extensive experiments are conducted on a short-duration text-independent speaker verification dataset generated from the low-resource Uyghur corpus THUYG-20, comprising speech samples of diverse lengths. The experimental results show that the metric learning approach is effective in avoiding model overfitting and improving generalization, with significant gains on short-duration few-shot speaker verification in low-resource Uyghur. They also show that BSML outperforms state-of-the-art deep-embedding speaker recognition architectures and recent metric learning approaches by 18%-67% on the few-shot test set. Ablation experiments further illustrate that the proposed approach achieves substantial improvement over prior methods, with better performance and generalization ability.
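The training recipe the abstract describes (learn to compare sample similarity over many small, disjoint tasks, then verify unseen speakers by the same similarity computation) maps onto a standard episodic metric-learning loop. Below is a minimal sketch of that loop, assuming a prototypical-network-style cosine metric and a toy encoder; `SpeakerEncoder`, `episode_loss`, and all shapes and hyperparameters are illustrative assumptions, not the paper's BSML implementation.

```python
# Minimal episodic (meta-metric) training sketch: train on many small
# speaker-classification tasks so the model learns to compare samples by
# similarity. All names and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpeakerEncoder(nn.Module):
    """Toy utterance encoder; the paper's bidirectional-sampling multi-scale
    feature-fusion front end would replace this stack."""
    def __init__(self, feat_dim=40, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(feat_dim, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over frames -> fixed-size vector
            nn.Flatten(),
            nn.Linear(128, emb_dim),
        )

    def forward(self, x):  # x: (batch, feat_dim, frames)
        return F.normalize(self.net(x), dim=-1)  # unit-norm speaker embeddings

def episode_loss(encoder, support, query, n_way, k_shot):
    """One few-shot task: average each speaker's k support embeddings into a
    prototype, then classify queries by cosine similarity to the prototypes.
    Queries are assumed grouped by speaker (speaker 0's queries first)."""
    protos = encoder(support).view(n_way, k_shot, -1).mean(dim=1)  # (n_way, emb)
    q_emb = encoder(query)                                         # (n_q, emb)
    logits = q_emb @ protos.t() / 0.1   # cosine similarities, temperature-scaled
    labels = torch.arange(n_way).repeat_interleave(q_emb.size(0) // n_way)
    return F.cross_entropy(logits, labels)

# Usage sketch: sample many disjoint episodes from the training speakers; at
# test time, verify an unseen speaker by the same similarity computation
# against its enrollment embedding.
encoder = SpeakerEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
support = torch.randn(5 * 3, 40, 200)   # 5-way, 3-shot dummy features
query = torch.randn(5 * 4, 40, 200)     # 4 queries per speaker
loss = episode_loss(encoder, support, query, n_way=5, k_shot=3)
loss.backward()
opt.step()
```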
Pages: 23