Multi-Source Domain Adaptation for Text-Independent Forensic Speaker Recognition

Cited by: 16
Authors
Wang, Zhenyu [1 ]
Hansen, John H. L. [1 ]
Affiliations
[1] Univ Texas Dallas, Ctr Robust Speech Syst, Erik Jonsson Sch Engn, Richardson, TX 75080 USA
Keywords
Speaker recognition; Forensics; Training; Adaptation models; Acoustics; Task analysis; Speech recognition; Discrepancy loss; forensics; multi-source domain adaptation; domain adversarial training; maximum mean discrepancy; moment-matching; speaker recognition; IDENTIFICATION; FRAMEWORK;
DOI
10.1109/TASLP.2021.3130975
CLC Classification Number
O42 [Acoustics];
Subject Classification Codes
070206 ; 082403 ;
Abstract
Adapting speaker recognition systems to new environments is a widely used technique for improving a well-performing model trained on large-scale data toward task-specific, small-scale data scenarios. However, previous studies have focused on single-domain adaptation, neglecting the more practical case, common in forensics, where training data are collected from multiple acoustic domains. Audio analysis for forensic speaker recognition poses unique challenges for model training with multi-domain data, owing to location/scenario uncertainty and diversity mismatch between reference and naturalistic field recordings. It is also difficult to directly employ small-scale domain-specific data to train complex neural network architectures, due to domain mismatch and the resulting performance loss. Fine-tuning is a commonly used adaptation method, in which the model is retrained with weights initialized from a well-trained model. Alternatively, in this study, three novel adaptation methods based on domain adversarial training, discrepancy minimization, and moment-matching approaches are proposed to further improve adaptation performance across multiple acoustic domains. A comprehensive set of experiments is conducted to demonstrate that: 1) diverse acoustic environments do impact speaker recognition performance, which could advance research in audio forensics; 2) domain adversarial training learns discriminative features that are also invariant to shifts between domains; 3) discrepancy-minimizing adaptation achieves effective performance simultaneously across multiple acoustic domains; and 4) moment-matching adaptation together with dynamic distribution alignment also significantly improves speaker recognition performance on each domain, especially the noisy LENA-field domain, compared to all other systems. The advancements in adaptation shown here therefore help ensure more consistent performance on field operational data in audio forensics.
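The maximum mean discrepancy (MMD) named in the abstract measures the distance between source- and target-domain embedding distributions, and minimizing it is one way to align domains during adaptation. A minimal sketch of a squared-MMD loss with an RBF kernel is given below; the kernel choice, bandwidth `sigma`, and function names are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Pairwise RBF (Gaussian) kernel matrix between rows of x and y."""
    # Squared Euclidean distances between all pairs of rows.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y.

    MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]
    Zero when the two sample sets share a distribution; grows as
    the embedding distributions of the two domains drift apart.
    """
    return (rbf_kernel(x, x, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean()
            + rbf_kernel(y, y, sigma).mean())
```

In a discrepancy-minimizing adaptation setup, a term like `mmd2(source_emb, target_emb)` would be added to the speaker-classification loss so the encoder is pushed to produce domain-invariant embeddings.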
Pages: 60-75
Page count: 16