Insights on Neural Representations for End-to-End Speech Recognition

Cited by: 5
Authors
Ollerenshaw, Anna [1]
Jalal, Asif [1]
Hain, Thomas [1]
Affiliations
[1] Univ Sheffield, Speech & Hearing Res Grp, Sheffield, S Yorkshire, England
Source
INTERSPEECH 2021 | 2021
Keywords
End-to-End; speech recognition; analysis;
DOI
10.21437/Interspeech.2021-1516
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject classification codes
100104; 100213
Abstract
End-to-end automatic speech recognition (ASR) models aim to learn a generalised speech representation. However, few tools are available for understanding the internal functions of a model and the effect of hierarchical dependencies within its architecture. Understanding the correlations between layer-wise representations is crucial for deriving insights into the relationship between neural representations and performance. Correlation analysis techniques for measuring network similarity have not previously been explored for end-to-end ASR models. This paper analyses the internal dynamics between layers during training for CNN-, LSTM- and Transformer-based approaches, using canonical correlation analysis (CCA) and centered kernel alignment (CKA) for the experiments. It was found that neural representations within CNN layers exhibit hierarchical correlation dependencies as layer depth increases, although this is mostly limited to cases where the representations correlate closely. This behaviour is not observed in the LSTM architecture, which instead shows a bottom-up pattern across the training process, while Transformer encoder layers exhibit irregular correlation coefficient patterns as layer depth increases. Altogether, these results provide new insights into the role that neural architectures play in speech recognition performance. More specifically, these techniques can be used as indicators for building better-performing speech recognition models.
Pages: 4079-4083
Number of pages: 5
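
The CCA and CKA measures named in the abstract are standard representation-similarity tools. As a rough illustration of the kind of layer comparison involved, the sketch below computes linear CKA between the activations of two layers over the same batch of frames. The layer names, array shapes and random data are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear CKA between two activation matrices of shape (n_frames, n_features)."""
    # Centre each feature dimension so the Gram matrices are mean-free.
    x = x - x.mean(axis=0, keepdims=True)
    y = y - y.mean(axis=0, keepdims=True)
    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(y.T @ x, ord="fro") ** 2
    self_x = np.linalg.norm(x.T @ x, ord="fro")
    self_y = np.linalg.norm(y.T @ y, ord="fro")
    return float(cross / (self_x * self_y))

# Hypothetical activations: the same 512 frames passed through two encoder layers.
rng = np.random.default_rng(0)
layer_3 = rng.standard_normal((512, 256))   # e.g. an early encoder layer
layer_6 = rng.standard_normal((512, 320))   # e.g. a deeper encoder layer
print(f"linear CKA(layer_3, layer_6) = {linear_cka(layer_3, layer_6):.3f}")
```

Repeating this pairwise comparison across all layer pairs and training checkpoints yields the kind of layer-by-layer correlation picture the abstract describes.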