ADAPTING SPEECH SEPARATION TO REAL-WORLD MEETINGS USING MIXTURE INVARIANT TRAINING

Cited by: 7
Authors
Sivaraman, Aswin [1 ,2 ]
Wisdom, Scott [1 ]
Erdogan, Hakan [1 ]
Hershey, John R. [1 ]
Affiliations
[1] Google Res, Mountain View, CA 94043 USA
[2] Indiana Univ, Bloomington, IN 47405 USA
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2022
Keywords
source separation; unsupervised learning; mixture invariant training; real-world audio processing;
DOI
10.1109/ICASSP43922.2022.9747855
Chinese Library Classification
O42 [Acoustics];
Discipline Classification Codes
070206; 082403;
Abstract
The recently proposed mixture invariant training (MixIT) is an unsupervised method for training single-channel sound separation models, in that it does not require ground-truth isolated reference sources. In this paper, we investigate using MixIT to adapt a separation model to real far-field, overlapping, reverberant, and noisy speech data from the AMI Corpus. The models are tested on real AMI recordings containing overlapping speech and are evaluated subjectively by human listeners. To evaluate our models objectively, we also devise a synthetic AMI test set. For human evaluations on real recordings, we propose a modification of the standard MUSHRA protocol that handles imperfect reference signals, which we call MUSHIRA. Holding network architectures constant, we find that a fine-tuned semi-supervised model yields the largest SI-SNR improvement, the highest PESQ scores, and the best human listening ratings across both synthetic and real datasets, outperforming unadapted generalist models trained on orders of magnitude more data. Our results show that unsupervised learning through MixIT enables model adaptation on real-world unlabeled spontaneous speech recordings.
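For context, the core idea of MixIT can be sketched in a few lines: two unlabeled mixtures are summed into a "mixture of mixtures", the separation model estimates M sources from that sum, and the loss searches over all binary assignments of the estimated sources back to the two original mixtures, keeping the best one. The sketch below is illustrative only, assuming NumPy arrays and a plain negative-SNR loss rather than the exact losses used in the paper; est_sources stands in for the output of whatever separation model is being trained.

import itertools
import numpy as np

def neg_snr(ref, est, eps=1e-8):
    # Negative signal-to-noise ratio in dB between a reference and an estimate
    # (lower is better; a scale-invariant variant would first rescale `est`).
    err = ref - est
    return -10.0 * np.log10((ref ** 2).sum() / ((err ** 2).sum() + eps) + eps)

def mixit_loss(x1, x2, est_sources):
    # MixIT loss for two reference mixtures x1, x2 (shape [T]) and
    # est_sources (shape [M, T]) separated from the mixture of mixtures x1 + x2.
    num_sources = est_sources.shape[0]
    best = np.inf
    # Enumerate every binary assignment of estimated sources to the two
    # mixtures and keep the lowest total loss (the "mixture invariance").
    for assignment in itertools.product([0, 1], repeat=num_sources):
        remix1 = sum(est_sources[i] for i in range(num_sources) if assignment[i] == 0)
        remix2 = sum(est_sources[i] for i in range(num_sources) if assignment[i] == 1)
        best = min(best, neg_snr(x1, remix1) + neg_snr(x2, remix2))
    return best

# Toy usage with random signals standing in for a model's output:
rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(16000), rng.standard_normal(16000)
est = rng.standard_normal((4, 16000))  # stand-in for separate(x1 + x2)
print(mixit_loss(x1, x2, est))

Because each estimated source only has to be assigned to one of the two input mixtures, no isolated reference sources are needed, which is what allows adaptation on unlabeled real-world recordings such as AMI.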
Pages: 686-690
Page count: 5