End-to-end Domain-Adversarial Voice Activity Detection

Cited by: 20
Authors
Lavechin, Marvin [1 ]
Gill, Marie-Philippe [2 ]
Bousbib, Ruben [1 ]
Bredin, Herve [3 ]
Garcia-Perera, Leibny Paola [4 ]
Affiliations
[1] PSL, Ecole Normale Super, INRIA, Cognit Machine Learning Team, Paris, France
[2] Univ Quebec, Ecole Technol Super, Montreal, PQ, Canada
[3] Univ Paris Sud, Univ Paris Saclay, LIMSI, CNRS, Orsay, France
[4] Johns Hopkins Univ, Ctr Language & Speech Proc, Baltimore, MD USA
Source
INTERSPEECH 2020 | 2020
Keywords
voice activity detection; domain adversarial training; sincnet; long short-term memory;
DOI
10.21437/Interspeech.2020-2285
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject Classification Codes
100104; 100213
Abstract
Voice activity detection is the task of detecting speech regions in a given audio stream or recording. First, we design a neural network combining trainable filters and recurrent layers to tackle voice activity detection directly from the waveform. Experiments on the challenging DIHARD dataset show that the proposed end-to-end model reaches state-of-the-art performance and outperforms a variant where trainable filters are replaced by standard cepstral coefficients. Our second contribution aims at making the proposed voice activity detection model robust to domain mismatch. To that end, a domain classification branch is added to the network and trained in an adversarial manner. The same DIHARD dataset, drawn from 11 different domains, is used for evaluation under two scenarios. In the in-domain scenario, where the training and test sets cover the exact same domains, we show that the domain-adversarial approach does not degrade the performance of the proposed end-to-end model. In the out-domain scenario, where the test domain is different from the training domains, it brings a relative improvement of more than 10%. Finally, our last contribution is the provision of a fully reproducible open-source pipeline that can be easily adapted to other datasets.
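The following is a minimal, illustrative PyTorch sketch of the two ideas summarized in the abstract: an end-to-end voice activity detector that maps the raw waveform through trainable filters and recurrent layers to frame-wise speech scores, and a domain classification branch trained adversarially through gradient reversal. The class names, layer sizes, and the plain Conv1d used as a stand-in for the paper's SincNet trainable filters are assumptions for illustration, not the authors' exact configuration.

# Minimal sketch (assumed PyTorch) of the approach described in the abstract:
# (1) raw waveform -> trainable filters -> recurrent layers -> frame-wise VAD,
# (2) a domain classifier trained adversarially via a gradient reversal layer.
# Layer sizes and the Conv1d stand-in for SincNet filters are illustrative only.
import torch
import torch.nn as nn
from torch.autograd import Function


class GradientReversal(Function):
    """Identity in the forward pass, reversed (scaled) gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None


class DomainAdversarialVAD(nn.Module):
    def __init__(self, num_domains: int, hidden: int = 128):
        super().__init__()
        # Stand-in for the paper's trainable SincNet band-pass filters (assumption).
        self.filters = nn.Sequential(
            nn.Conv1d(1, 80, kernel_size=251, stride=80),
            nn.ReLU(),
        )
        # Recurrent layers operating on the learned frame-level features.
        self.recurrent = nn.LSTM(80, hidden, num_layers=2,
                                 batch_first=True, bidirectional=True)
        # Frame-wise speech / non-speech scores.
        self.vad_head = nn.Linear(2 * hidden, 1)
        # Domain classifier fed through the gradient reversal layer.
        self.domain_head = nn.Linear(2 * hidden, num_domains)

    def forward(self, waveform, grl_lambda: float = 1.0):
        # waveform: (batch, samples) raw audio
        feats = self.filters(waveform.unsqueeze(1))     # (batch, 80, frames)
        feats = feats.transpose(1, 2)                   # (batch, frames, 80)
        hidden, _ = self.recurrent(feats)               # (batch, frames, 2*hidden)
        vad_logits = self.vad_head(hidden).squeeze(-1)  # (batch, frames)
        # Utterance-level pooling before domain classification (assumed).
        pooled = hidden.mean(dim=1)
        reversed_ = GradientReversal.apply(pooled, grl_lambda)
        domain_logits = self.domain_head(reversed_)     # (batch, num_domains)
        return vad_logits, domain_logits


# Usage sketch: 4 two-second clips at 16 kHz (assumed sampling rate).
model = DomainAdversarialVAD(num_domains=11)
wav = torch.randn(4, 32000)
vad_logits, domain_logits = model(wav)

During training, the voice activity detection loss on the frame-wise logits is minimized as usual, while the gradient reversed from the domain loss pushes the shared encoder toward representations that do not discriminate between the 11 DIHARD domains.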
Pages: 3685-3689
Page count: 5