Directed Speech Separation for Automatic Speech Recognition of Long-form Conversational Speech

Cited by: 5
Authors
Paturi, Rohit [1 ]
Srinivasan, Sundararajan [1 ]
Kirchhoff, Katrin [1 ]
Romero, Daniel Garcia [1 ]
Affiliations
[1] Amazon AWS AI, Washington, DC 20052 USA
Source
INTERSPEECH 2022 | 2022
Keywords
Speech Separation; Speaker embeddings; Spectral clustering; ASR; deep learning;
DOI
10.21437/Interspeech.2022-10843
CLC Number
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
Many recent advances in speech separation are aimed primarily at synthetic mixtures of short audio utterances with high degrees of overlap, and most of these approaches need an additional stitching step to join the separated speech chunks for long-form audio. Since most of the approaches involve permutation invariant training (PIT), the order of the separated speech chunks is nondeterministic, which makes it difficult to accurately stitch homogeneous speaker chunks for downstream tasks like automatic speech recognition (ASR). Moreover, most of these models are trained on synthetic mixtures and do not generalize to real conversational data. In this paper, we propose a speaker-conditioned separator trained on speaker embeddings extracted directly from the mixed signal using an over-clustering-based approach. This model naturally regulates the order of the separated chunks without the need for an additional stitching step. We also introduce a data sampling strategy with real and synthetic mixtures that generalizes well to real conversational speech. With this model and data sampling technique, we show significant improvements in speaker-attributed word error rate (SA-WER) on Hub5 data.
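The abstract describes extracting speaker embeddings directly from the mixed signal with an over-clustering-based approach (the keywords also list spectral clustering). As a rough illustration of the over-cluster-then-merge idea only, here is a minimal NumPy sketch: embeddings are first grouped into more clusters than the expected number of speakers, and the most similar clusters are then merged down to one centroid per speaker. The function name, the plain k-means step, and the cosine-similarity merge are hypothetical stand-ins, not the paper's actual method.

```python
import numpy as np

def overcluster_embeddings(embs, n_clusters, n_speakers, n_iters=20):
    """Illustrative over-clustering sketch (not the paper's algorithm):
    cluster frame-level speaker embeddings into n_clusters > n_speakers
    groups, then greedily merge the most similar clusters down to
    n_speakers centroids, one per speaker."""
    # Deterministic k-means init: seeds spread evenly over the frames.
    idx = np.linspace(0, len(embs) - 1, n_clusters).astype(int)
    centroids = embs[idx].astype(float)
    for _ in range(n_iters):
        # Assign each embedding to its nearest centroid, then recompute means.
        dists = np.linalg.norm(embs[:, None, :] - centroids[None, :, :], axis=-1)
        assign = dists.argmin(axis=1)
        for k in range(n_clusters):
            members = embs[assign == k]
            if len(members):
                centroids[k] = members.mean(axis=0)
    # Greedily merge the two most cosine-similar centroids until only
    # n_speakers centroids remain.
    cents = list(centroids)
    while len(cents) > n_speakers:
        C = np.stack(cents)
        C = C / np.linalg.norm(C, axis=1, keepdims=True)
        sim = C @ C.T
        np.fill_diagonal(sim, -np.inf)
        i, j = np.unravel_index(sim.argmax(), sim.shape)
        merged = (cents[i] + cents[j]) / 2.0
        cents = [c for k, c in enumerate(cents) if k not in (i, j)]
        cents.append(merged)
    return np.stack(cents)
```

The returned centroids could then condition a separator, fixing the output order by speaker identity and avoiding PIT's nondeterministic channel ordering; the paper itself uses a spectral-clustering variant rather than this simplified k-means-plus-merge procedure.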
Pages: 5388-5392
Number of pages: 5