Broadcasted Residual Learning for Efficient Keyword Spotting

Cited by: 51
Authors
Kim, Byeonggeun [1 ]
Chang, Simyung [1 ]
Lee, Jinkyu [1 ]
Sung, Dooyong [1 ]
Affiliations
[1] Qualcomm Korea YH, Qualcomm AI Res, Seoul, South Korea
Source
INTERSPEECH 2021 | 2021
Keywords
keyword spotting; speech command recognition; deep neural network; efficient neural network; residual learning;
DOI
10.21437/Interspeech.2021-383
Chinese Library Classification (CLC)
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject Classification Codes
100104; 100213
Abstract
Keyword spotting is an important research field because it plays a key role in device wake-up and user interaction on smart devices. However, it is challenging to minimize errors while operating efficiently on devices with limited resources such as mobile phones. We present a broadcasted residual learning method to achieve high accuracy with a small model size and low computational load. Our method configures most of the residual functions as 1D temporal convolutions while still allowing 2D convolution, using a broadcasted residual connection that expands the temporal output to the frequency-temporal dimension. This residual mapping enables the network to effectively represent useful audio features with much less computation than conventional convolutional neural networks. We also propose a novel network architecture, the broadcasting-residual network (BC-ResNet), based on broadcasted residual learning, and describe how to scale up the model according to the target device's resources. BC-ResNets achieve state-of-the-art 98.0% and 98.7% top-1 accuracy on Google speech command datasets v1 and v2, respectively, and consistently outperform previous approaches while using fewer computations and parameters.
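The broadcasted residual connection described in the abstract can be illustrated with a minimal sketch. The PyTorch block below is an assumption-laden illustration, not the authors' exact BC-ResNet block: the class name BroadcastedResidualBlock, the kernel size, normalization, and activation choices are hypothetical. It shows the core idea stated in the abstract, namely that the residual function is a 1D temporal convolution applied to frequency-averaged features, and its output is broadcast back over the frequency axis before being added to the 2D frequency-temporal identity path.

```python
import torch
import torch.nn as nn


class BroadcastedResidualBlock(nn.Module):
    """Illustrative broadcasted residual block (a sketch, not the authors' exact layer)."""

    def __init__(self, channels: int, temporal_kernel: int = 3):
        super().__init__()
        # Residual function: 1D depthwise temporal conv + pointwise mixing,
        # applied after collapsing the frequency axis.
        self.temporal_conv = nn.Sequential(
            nn.Conv1d(channels, channels, temporal_kernel,
                      padding=temporal_kernel // 2, groups=channels),
            nn.BatchNorm1d(channels),
            nn.SiLU(),
            nn.Conv1d(channels, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frequency, time)
        t = x.mean(dim=2)              # average over frequency -> (B, C, T)
        t = self.temporal_conv(t)      # 1D temporal residual function
        # Broadcasted residual connection: expand the temporal output back to
        # the frequency-temporal plane and add it to the identity path.
        return x + t.unsqueeze(2)      # (B, C, 1, T) broadcasts over frequency


# Usage: 16 channels, 40 mel-frequency bins, 101 frames (~1 s of audio features).
feats = torch.randn(8, 16, 40, 101)
out = BroadcastedResidualBlock(channels=16)(feats)
print(out.shape)  # torch.Size([8, 16, 40, 101])
```

Because the 1D branch operates on a single frequency-averaged sequence, its cost scales with T rather than F x T, which is the source of the computational savings the abstract claims relative to conventional 2D convolutions.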
Pages: 4538-4542
Page count: 5