IMPROVING SELF-SUPERVISED LEARNING FOR SPEECH RECOGNITION WITH INTERMEDIATE LAYER SUPERVISION

Cited: 15
Authors
Wang, Chengyi [1 ,2 ]
Wu, Yu [2 ]
Chen, Sanyuan [2 ]
Liu, Shujie [2 ]
Li, Jinyu [2 ]
Qian, Yao [2 ]
Yang, Zhenglu [1 ]
Affiliations
[1] NanKai Univ, Tianjin, Peoples R China
[2] Microsoft Corp, Redmond, WA 98052 USA
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2022
Keywords
Self-supervised learning; Automatic speech recognition; Representation
DOI
10.1109/ICASSP43922.2022.9747022
CLC Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Recent pioneering work has found that self-supervised pre-training methods improve multiple downstream speech tasks because the model uses its bottom layers to learn speaker-related information and its top layers to encode content-related information. Since network capacity is limited, we believe speech recognition performance could be further improved if the model were dedicated to learning audio content information. To this end, we propose Intermediate Layer Supervision for Self-Supervised Learning (ILS-SSL), which forces the model to concentrate on content information as much as possible by adding an additional SSL loss on the intermediate layers. Experiments on the LibriSpeech test-other set show that our method significantly outperforms HuBERT, achieving relative word error rate reductions of 23.5%/11.6% for the Base/Large models in the setting without a language model. Detailed analysis shows that the bottom layers of our model correlate better with phonetic units, which is consistent with our intuition and explains the success of our method for ASR. We will release our code and model at https://github.com/microsoft/UniSpeech.
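The core idea described in the abstract, applying the same SSL loss to selected intermediate layers in addition to the final layer, can be sketched as follows. This is an illustrative toy implementation under stated assumptions, not the authors' code: the tanh "layers", the projection matrix, and the random discrete targets are all stand-ins for the real HuBERT-style transformer stack and its masked-prediction targets.

```python
import numpy as np

def layer_forward(x, w):
    # One toy "encoder layer": a linear map followed by a tanh nonlinearity.
    return np.tanh(x @ w)

def ssl_loss(hidden, targets, proj):
    # Masked-prediction-style SSL loss: project hidden states to logits over
    # discrete target units and take the mean cross-entropy.
    logits = hidden @ proj
    logits = logits - logits.max(axis=-1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def ils_ssl_loss(x, weights, proj, targets, intermediate_layers):
    """Total loss = SSL loss at the final layer plus the same loss at the
    chosen intermediate layers -- the core idea behind ILS-SSL."""
    total = 0.0
    h = x
    for i, w in enumerate(weights):
        h = layer_forward(h, w)
        if i in intermediate_layers or i == len(weights) - 1:
            total += ssl_loss(h, targets, proj)
    return total

rng = np.random.default_rng(0)
T, d, V, L = 8, 16, 32, 4          # frames, hidden dim, codebook size, layers
x = rng.normal(size=(T, d))
weights = [rng.normal(scale=0.3, size=(d, d)) for _ in range(L)]
proj = rng.normal(scale=0.3, size=(d, V))
targets = rng.integers(0, V, size=T)

# Supervise layer 1 in addition to the final layer (layer 3).
loss = ils_ssl_loss(x, weights, proj, targets, intermediate_layers={1})
print(loss)
```

Because each per-layer cross-entropy is positive, adding intermediate supervision strictly increases the training objective relative to final-layer-only supervision; the claimed benefit is that the gradient pressure on lower layers pushes them toward content (phonetic) information rather than speaker information.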
Pages: 7092-7096
Page count: 5