Multi-Source Domain Adaptation for Text-Independent Forensic Speaker Recognition

Times Cited: 16
Authors
Wang, Zhenyu [1 ]
Hansen, John H. L. [1 ]
Affiliations
[1] Univ Texas Dallas, Ctr Robust Speech Syst, Erik Jonsson Sch Engn, Richardson, TX 75080 USA
Keywords
Speaker recognition; Forensics; Training; Adaptation models; Acoustics; Task analysis; Speech recognition; Discrepancy loss; forensics; multi-source domain adaptation; domain adversarial training; maximum mean discrepancy; moment-matching; speaker recognition; IDENTIFICATION; FRAMEWORK;
DOI
10.1109/TASLP.2021.3130975
Chinese Library Classification (CLC)
O42 [Acoustics];
Discipline Codes
070206 ; 082403 ;
Abstract
Adapting speaker recognition systems to new environments is a widely used technique for improving a well-performing model, learned from large-scale data, in task-specific small-scale data scenarios. However, previous studies focus on single-domain adaptation, which neglects the more practical scenario, common in forensics, where training data are collected from multiple acoustic domains. Audio analysis for forensic speaker recognition presents unique challenges for model training with multi-domain data, owing to location/scenario uncertainty and diversity mismatch between reference and naturalistic field recordings. It is also difficult to train complex neural network architectures directly on small-scale domain-specific data because of domain mismatch and the resulting performance loss. Fine-tuning is a commonly used adaptation method, in which the model is retrained with weights initialized from a well-trained model. Alternatively, in this study, three novel adaptation methods based on domain adversarial training, discrepancy minimization, and moment matching are proposed to further improve adaptation performance across multiple acoustic domains. A comprehensive set of experiments demonstrates that: 1) diverse acoustic environments do impact speaker recognition performance, which could advance research in audio forensics; 2) domain adversarial training learns discriminative features that are also invariant to shifts between domains; 3) discrepancy-minimizing adaptation achieves effective performance simultaneously across multiple acoustic domains; and 4) moment-matching adaptation along with dynamic distribution alignment also significantly improves speaker recognition performance on each domain, especially the noisy LENA-field domain, compared to all other systems. The advancements in adaptation shown here therefore help ensure more consistent performance on field operational data in audio forensics.
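The abstract names three adaptation strategies; the sketch below illustrates two of them in generic form, not the authors' implementation: a Gaussian-kernel maximum mean discrepancy (MMD) loss for discrepancy minimization across multiple source domains, and a gradient-reversal layer as used in domain adversarial training. PyTorch, the 512-dimensional x-vector-style embeddings, and the gaussian_mmd / multi_source_discrepancy / GradReverse names are illustrative assumptions.

# A minimal sketch, assuming PyTorch and hypothetical embedding tensors.
import torch


def gaussian_mmd(x, y, sigma=1.0):
    # Squared maximum mean discrepancy between two embedding batches using a
    # Gaussian (RBF) kernel; minimizing it pulls the two distributions together.
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()


def multi_source_discrepancy(source_batches, target_batch):
    # Discrepancy-minimizing loss for multi-source adaptation: average the MMD
    # between each source-domain batch and the target (field-recording) batch.
    losses = [gaussian_mmd(src, target_batch) for src in source_batches]
    return torch.stack(losses).mean()


class GradReverse(torch.autograd.Function):
    # Gradient-reversal layer for domain adversarial training: identity in the
    # forward pass, negated and scaled gradient in the backward pass, so the
    # feature extractor learns to confuse a domain classifier.
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


if __name__ == "__main__":
    # Three hypothetical source domains and one target domain, 32 embeddings each.
    sources = [torch.randn(32, 512) for _ in range(3)]
    target = torch.randn(32, 512)
    print(multi_source_discrepancy(sources, target))
    reversed_feats = GradReverse.apply(target, 0.5)  # would feed a domain classifier

In a complete system the MMD term would be added to the speaker-classification loss and the reversed features would feed a per-domain classifier; both details are design choices of this sketch rather than claims about the paper's architecture.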
Pages: 60-75
Page Count: 16
Related Papers
50 records in total
  • [1] Cross-domain Adaptation with Discrepancy Minimization for Text-independent Forensic Speaker Verification
    Wang, Zhenyu
    Xia, Wei
    Hansen, John H. L.
    INTERSPEECH 2020, 2020, : 2257 - 2261
  • [2] Compensation for domain mismatch in text-independent speaker recognition
    Bahmaninezhad, Fahimeh
    Hansen, John H. L.
    19TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION (INTERSPEECH 2018), VOLS 1-6: SPEECH RESEARCH FOR EMERGING MARKETS IN MULTILINGUAL SOCIETIES, 2018, : 1071 - 1075
  • [3] Multi-Source Domain Adaptation and Fusion for Speaker Verification
    Zhu, Donghui
    Chen, Ning
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2022, 30 : 2103 - 2116
  • [4] Improving Text-independent Speaker Recognition with GMM
    Chakroun, Rania
    Zouari, Leila Beltaifa
    Frikha, Mondher
    Ben Hamida, Ahmed
    2016 2ND INTERNATIONAL CONFERENCE ON ADVANCED TECHNOLOGIES FOR SIGNAL AND IMAGE PROCESSING (ATSIP), 2016, : 693 - 696
  • [5] Collaborative and adversarial network for text-independent speaker verification in domain adaptation
    Qiang, Junhao
    Yang, Qun
    Gao, Jie
    Liu, Shaohan
    ELECTRONICS LETTERS, 2023, 59 (02)
  • [6] Effect of Spoken Text on Text-independent Speaker Recognition
    Alsulaiman, Mansour
    PROCEEDINGS FIFTH INTERNATIONAL CONFERENCE ON INTELLIGENT SYSTEMS, MODELLING AND SIMULATION, 2014, : 279 - 284
  • [7] An Improved Approach for Text-Independent Speaker Recognition
    Chakroun, Rania
    Zouari, Leila Beltaifa
    Frikha, Mondher
    INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS, 2016, 7 (08) : 343 - 348
  • [8] Multi-resolution form of SVD for text-independent speaker recognition
    Lung, SY
    PATTERN RECOGNITION, 2002, 35 (07) : 1637 - 1639
  • [9] Supervised domain adaptation for text-independent speaker verification using limited data
    Sarfjoo, Seyyed Saeed
    Madikeri, Srikanth
    Motlicek, Petr
    Marcel, Sebastien
    INTERSPEECH 2020, 2020, : 3815 - 3819
  • [10] Speaker-specific mapping for text-independent speaker recognition
    Misra, H
    Ikbal, S
    Yegnanarayana, B
    SPEECH COMMUNICATION, 2003, 39 (3-4) : 301 - 310