SELF-SUPERVISED PRETRAINING FOR ROBUST PERSONALIZED VOICE ACTIVITY DETECTION IN ADVERSE CONDITIONS

Cited by: 2
Authors
Bovbjerg, Holger Severin [1]
Jensen, Jesper [1,2]
Ostergaard, Jan [1]
Tan, Zheng-Hua [1,3]
Affiliations
[1] Aalborg Univ, Dept Elect Syst, Aalborg, Denmark
[2] Oticon AS, Copenhagen, Denmark
[3] Pioneer Ctr AI, Copenhagen, Denmark
Source
2024 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2024) | 2024
Keywords
Self-Supervised Learning; Voice Activity Detection; Target Speaker; Deep Learning; Speech
DOI
10.1109/ICASSP48485.2024.10447653
Chinese Library Classification
O42 [Acoustics]
Discipline codes
070206; 082403
Abstract
In this paper, we propose the use of self-supervised pretraining on a large unlabelled data set to improve the performance of a personalized voice activity detection (VAD) model in adverse conditions. We pretrain a long short-term memory (LSTM)-encoder using the autoregressive predictive coding (APC) framework and fine-tune it for personalized VAD. We also propose a denoising variant of APC, with the goal of improving the robustness of personalized VAD. The trained models are systematically evaluated on both clean speech and speech contaminated by various types of noise at different SNR-levels and compared to a purely supervised model. Our experiments show that self-supervised pretraining not only improves performance in clean conditions, but also yields models which are more robust to adverse conditions compared to purely supervised learning.
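The record does not include the paper's code, but the pretraining idea in the abstract can be sketched briefly. The following is a minimal, illustrative PyTorch version of autoregressive predictive coding (APC): an LSTM encoder reads acoustic feature frames and is trained to predict the frame a few steps ahead, with an L1 loss; a denoising variant feeds noisy features and targets the clean future frame. Layer sizes, the prediction shift, and the feature dimension here are assumptions for illustration, not values from the paper.

```python
import torch
import torch.nn as nn


class APCEncoder(nn.Module):
    """Illustrative APC model: an LSTM encoder plus a linear head
    that predicts a future feature frame (sizes are assumptions)."""

    def __init__(self, n_feats: int = 40, hidden: int = 512, layers: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(n_feats, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, n_feats)  # maps hidden state back to feature space

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.lstm(x)      # (batch, time, hidden) representations
        return self.proj(h)      # (batch, time, n_feats) predicted frames


def apc_loss(model: APCEncoder, feats: torch.Tensor, shift: int = 3) -> torch.Tensor:
    """L1 loss between the prediction at time t and the input at t + shift."""
    pred = model(feats[:, :-shift])
    target = feats[:, shift:]
    return nn.functional.l1_loss(pred, target)


def denoising_apc_loss(model: APCEncoder, noisy: torch.Tensor,
                       clean: torch.Tensor, shift: int = 3) -> torch.Tensor:
    """Denoising variant (our reading of the abstract): encode the noisy
    features but predict the future *clean* frame."""
    pred = model(noisy[:, :-shift])
    return nn.functional.l1_loss(pred, clean[:, shift:])
```

After pretraining on unlabelled audio, the LSTM encoder would be kept and fine-tuned with a VAD classification head on labelled data; that fine-tuning stage is not shown here.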
Pages: 10126-10130
Page count: 5
References (24 in total)
[1] Alisamir S., 2022, arXiv, DOI arXiv:2209.11061
[2] Baevski A., 2020, Advances in Neural Information Processing Systems, V33
[3] Chen, Sanyuan; Wang, Chengyi; Chen, Zhengyang; Wu, Yu; Liu, Shujie; Chen, Zhuo; Li, Jinyu; Kanda, Naoyuki; Yoshioka, Takuya; Xiao, Xiong; Wu, Jian; Zhou, Long; Ren, Shuo; Qian, Yanmin; Qian, Yao; Zeng, Michael; Yu, Xiangzhan; Wei, Furu. WavLM: Large-Scale Self-Supervised Pre-Training for Full Stack Speech Processing. IEEE Journal of Selected Topics in Signal Processing, 2022, 16(06): 1505-1518
[4] Chung Y. A., 2020, Int. Conf. Acoust. Speech Signal Process., P3497, DOI 10.1109/ICASSP40776.2020.9054438
[5] Chung Yu-An, Proc. Interspeech 201, P146
[6] Ding S., 2020, P OD, P433
[7] Ding, Shaojin; Rikhye, Rajeev; Liang, Qiao; He, Yanzhang; Wang, Quan; Narayanan, Arun; O'Malley, Tom; McGraw, Ian. Personal VAD 2.0: Optimizing Personal Voice Activity Detection for On-Device Speech Recognition. Interspeech 2022, 2022: 3744-3748
[8] Dinkel, Heinrich; Wang, Shuai; Xu, Xuenan; Wu, Mengyue; Yu, Kai. Voice Activity Detection in the Wild: A Data-Driven Approach Using Teacher-Student Training. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 1542-1555
[9] He Maokui, 2021, P INT, P2523
[10] Hsu, Wei-Ning; Bolte, Benjamin; Tsai, Yao-Hung Hubert; Lakhotia, Kushal; Salakhutdinov, Ruslan; Mohamed, Abdelrahman. HuBERT: Self-Supervised Speech Representation Learning by Masked Prediction of Hidden Units. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2021, 29: 3451-3460