A COMPARATIVE STUDY OF MODULAR AND JOINT APPROACHES FOR SPEAKER-ATTRIBUTED ASR ON MONAURAL LONG-FORM AUDIO

Cited: 1
Authors
Kanda, Naoyuki [1]
Xiao, Xiong [1]
Wu, Jian [1]
Zhou, Tianyan [1]
Gaur, Yashesh [1]
Wang, Xiaofei [1]
Meng, Zhong [1]
Chen, Zhuo [1]
Yoshioka, Takuya [1]
Affiliations
[1] Microsoft Corp, Redmond, WA 98052 USA
Source
2021 IEEE AUTOMATIC SPEECH RECOGNITION AND UNDERSTANDING WORKSHOP (ASRU) | 2021
Keywords
Multi-speaker speech recognition; speaker counting; speaker identification; serialized output training; speech separation; dataset
DOI
10.1109/ASRU51503.2021.9687974
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Speaker-attributed automatic speech recognition (SA-ASR) is the task of recognizing "who spoke what" from multi-talker recordings. An SA-ASR system usually consists of multiple modules, such as speech separation, speaker diarization, and ASR. Alternatively, an end-to-end (E2E) SA-ASR model that jointly optimizes these components has recently been proposed, with promising results on simulated data. In this paper, we compare such modular and joint approaches to SA-ASR on real monaural recordings. We develop state-of-the-art SA-ASR systems for both approaches by leveraging large-scale training data, including 75 thousand hours of ASR training data and the VoxCeleb corpus for speaker representation learning. We also propose a new pipeline that applies the E2E SA-ASR model after speaker clustering. Our evaluation on the AMI meeting corpus reveals that, after fine-tuning with a small amount of real data, the joint system achieves 8.9-29.9% better accuracy than the best modular system, whereas the modular system performs better before such fine-tuning. We also conduct various error analyses to expose the remaining issues in monaural SA-ASR.
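The pipeline proposed in the abstract can be summarized in a short sketch. The following Python is a minimal, hypothetical illustration (not the authors' code) of the proposed ordering: a speaker clustering stage first derives speaker profiles from the long-form recording, and the E2E SA-ASR model then transcribes each window while attributing tokens to those profiles. All function names, signatures, and the naive greedy clustering rule below are placeholder assumptions.

from dataclasses import dataclass

import numpy as np


@dataclass
class AttributedToken:
    token: str        # recognized word piece
    speaker_id: int   # index into the clustered speaker profiles


def extract_speaker_embeddings(audio_windows):
    # Placeholder: a real system would run a speaker encoder trained on
    # VoxCeleb (as in the paper) over each window; random vectors stand in.
    rng = np.random.default_rng(0)
    return [rng.standard_normal(128) for _ in audio_windows]


def cluster_speakers(embeddings, threshold=0.5):
    # Naive greedy cosine-similarity clustering as a stand-in for the
    # speaker clustering stage; keeps one unit-norm centroid per speaker.
    profiles = []
    for e in embeddings:
        e = e / np.linalg.norm(e)
        if profiles and max(float(e @ p) for p in profiles) > threshold:
            continue  # close to an existing profile: reuse it (update omitted)
        profiles.append(e)  # otherwise start a new speaker profile
    return profiles


def e2e_sa_asr(audio_window, speaker_profiles):
    # Placeholder for the joint E2E SA-ASR model, which would emit tokens
    # and attribute each one to a profile in a single inference pass.
    return [AttributedToken(token="<word>", speaker_id=0)]


def transcribe_meeting(audio_windows):
    # "Clustering first, joint model second": the ordering the paper
    # proposes for applying E2E SA-ASR to long-form monaural audio.
    profiles = cluster_speakers(extract_speaker_embeddings(audio_windows))
    attributed = []
    for window in audio_windows:
        attributed.extend(e2e_sa_asr(window, profiles))
    return attributed


if __name__ == "__main__":
    dummy_windows = [None] * 4  # stand-ins for audio segments
    print(transcribe_meeting(dummy_windows))

The point of the sketch is the data flow, not the components: the clustering output serves as the profile inventory that constrains the joint model's speaker attribution, which is what distinguishes this pipeline from a fully modular diarization-then-ASR cascade.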
Pages: 296-303
Page count: 8