AISHELL-4: An Open Source Dataset for Speech Enhancement, Separation, Recognition and Speaker Diarization in Conference Scenario

Cited by: 26
Authors
Fu, Yihui [1 ]
Cheng, Luyao [1 ]
Lv, Shubo [1 ]
Jv, Yukai [1 ]
Kong, Yuxiang [1 ]
Chen, Zhuo [2 ]
Hu, Yanxin [1 ]
Xie, Lei [1 ]
Wu, Jian [3 ]
Bu, Hui [4 ]
Xu, Xin [4 ]
Du, Jun [5 ]
Chen, Jingdong [1 ]
Affiliations
[1] Northwestern Polytech Univ, Xian, Peoples R China
[2] Microsoft Corp, Redmond, WA 98052 USA
[3] Microsoft Corp, Beijing, Peoples R China
[4] Beijing Shell Shell Technol Co Ltd, Beijing, Peoples R China
[5] Univ Sci & Technol China, Hefei, Peoples R China
Source
INTERSPEECH 2021 | 2021
Keywords
AISHELL-4; speech front-end processing; speech recognition; speaker diarization; conference scenario; Mandarin; CORPUS; NOISY;
DOI
10.21437/Interspeech.2021-1397
CLC Classification
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject Classification Codes
100104; 100213
Abstract
In this paper, we present AISHELL-4, a sizable real-recorded Mandarin speech dataset collected by an 8-channel circular microphone array for speech processing in the conference scenario. The dataset consists of 211 recorded meeting sessions, each containing 4 to 8 speakers, with a total length of 120 hours. This dataset aims to bridge advanced research on multi-speaker processing and the practical application scenario in three aspects. First, with real recorded meetings, AISHELL-4 provides realistic acoustics and rich natural speech characteristics of conversation, such as short pauses, speech overlaps, quick speaker turns, and noise. Second, accurate transcriptions and speaker voice activity annotations are provided for each meeting. This allows researchers to explore different aspects of meeting processing, ranging from individual tasks such as speech front-end processing, speech recognition, and speaker diarization, to multi-modality modeling and joint optimization of relevant tasks. Third, given that most open-source datasets for multi-speaker tasks are in English, AISHELL-4 is the only Mandarin dataset for conversational speech, providing additional value for data diversity in the speech community. We also release a PyTorch-based training and evaluation framework as a baseline system to promote reproducible research in this field.
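Since each AISHELL-4 session is captured by an 8-channel circular microphone array, a session waveform loads as a (num_samples, 8) array. The minimal sketch below, which is not the released baseline, simulates one second of 16 kHz 8-channel audio (standing in for an actual dataset file read) and applies the simplest possible front-end, delay-and-sum with zero delays (plain channel averaging), to obtain a single-channel signal for downstream recognition or diarization:

```python
import numpy as np

SR = 16_000        # AISHELL-4 sampling rate
NUM_CHANNELS = 8   # circular microphone array size

# Stand-in for reading a real session wav (e.g. via soundfile),
# which would yield an array of shape (num_samples, NUM_CHANNELS).
rng = np.random.default_rng(0)
multichannel = rng.standard_normal((SR, NUM_CHANNELS))

# Naive front-end: delay-and-sum with zero delays, i.e. average
# the 8 channels into one mono signal.
mono = multichannel.mean(axis=1)

print(mono.shape)  # (16000,)
```

A real front-end in the conference scenario would estimate per-channel delays (or a beamforming filter) rather than averaging, but the channel layout and shapes are the same.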
Pages: 3665-3669
Number of pages: 5