Combined Keyword Spotting and Localization Network Based on Multi-Task Learning

Cited by: 0
Authors
Ko, Jungbeom [1 ]
Kim, Hyunchul [2 ]
Kim, Jungsuk [3 ]
Affiliations
[1] Gachon Univ, Gachon Adv Inst Hlth Sci & Technol GAIHST, Dept Hlth Sci & Technol, Incheon 21936, South Korea
[2] Univ Calif Berkeley, Sch Informat, 102 South Hall 4600, Berkeley, CA 94720 USA
[3] Gachon Univ, Coll IT Convergence, Dept Biomed Engn, Seongnam Si 13120, South Korea
Funding
National Research Foundation of Singapore;
Keywords
deep neural network; keyword spotting; sound source localization; multi-task learning;
DOI
10.3390/math12213309
CLC Number
O1 [Mathematics];
Subject Classification Code
0701; 070101;
Abstract
The advent of voice assistant technology and its integration into smart devices has enabled many useful services, such as texting and application execution. However, most such systems cannot act as a human listener would, localizing the speaker while selectively spotting meaningful keywords. Because keyword spotting (KWS) and sound source localization (SSL) are both essential and must operate in real time, the memory and computational efficiency of the neural network model is crucial. In this paper, a single neural network model for KWS and SSL is proposed to overcome the limitations of running KWS and SSL sequentially, which requires more memory and inference time. The proposed model uses multi-task learning to utilize the limited resources of the device efficiently: a shared encoder forms the initial layers and extracts common features from the multichannel audio data, and task-specific parallel layers then use these features for KWS and SSL. The proposed model was evaluated on a synthetic dataset with multiple speakers, and a 7-module shared encoder structure was identified as optimal in terms of KWS accuracy, direction-of-arrival (DOA) accuracy, DOA error, and latency. It achieved a KWS accuracy of 94.51%, a DOA error of 12.397 degrees, and a DOA accuracy of 89.86%. Consequently, owing to its shared network architecture, the proposed model requires significantly less memory and reduces inference time without compromising KWS accuracy, DOA error, or DOA accuracy.
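As a concrete illustration of the architecture the abstract describes, the following is a minimal PyTorch sketch of a shared encoder feeding two parallel task-specific heads, one for keyword classification and one for DOA estimation. The channel widths, input shapes, and the treatment of DOA as classification over discretized azimuth bins are assumptions made for illustration only, not the authors' exact design.

import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    # One shared-encoder module: Conv2d -> BatchNorm -> ReLU.
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

class KwsSslNet(nn.Module):
    # Shared encoder (7 modules, mirroring the abstract) + two parallel
    # task-specific heads. All sizes below are illustrative assumptions.
    def __init__(self, num_mics: int = 4, num_keywords: int = 12,
                 num_doa_bins: int = 36, num_shared_blocks: int = 7):
        super().__init__()
        chans = [num_mics] + [32] * num_shared_blocks
        self.encoder = nn.Sequential(
            *[ConvBlock(chans[i], chans[i + 1]) for i in range(num_shared_blocks)]
        )
        self.pool = nn.AdaptiveAvgPool2d(1)          # collapse freq/time axes
        self.kws_head = nn.Linear(32, num_keywords)  # keyword logits
        self.ssl_head = nn.Linear(32, num_doa_bins)  # azimuth-bin logits

    def forward(self, x: torch.Tensor):
        # x: (batch, mics, freq, time) multichannel spectrogram-like features
        h = self.pool(self.encoder(x)).flatten(1)
        return self.kws_head(h), self.ssl_head(h)

# Multi-task training step: the joint loss is a weighted sum of the two
# cross-entropies; the 0.5 task weight is a tunable hyperparameter.
model = KwsSslNet()
kws_logits, doa_logits = model(torch.randn(8, 4, 64, 100))
kws_loss = nn.functional.cross_entropy(kws_logits, torch.randint(0, 12, (8,)))
doa_loss = nn.functional.cross_entropy(doa_logits, torch.randint(0, 36, (8,)))
loss = kws_loss + 0.5 * doa_loss

Because both heads consume the same pooled encoder features, the encoder parameters and activations are computed once per inference, which is the source of the memory and latency savings reported in the abstract.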
Pages: 14
References
30 records in total
[11] Kingma, Diederik P., 2014, arXiv preprint arXiv
[12] Ko, Jungbeom; Kim, Hyunchul; Kim, Jungsuk. Real-Time Sound Source Localization for Low-Power IoT Devices Based on Multi-Stream CNN [J]. SENSORS, 2022, 22 (12)
[13] Liu, Haitao; Zhang, Xiuliang; Li, Penggao; Yao, Yu; Zhang, Sheng; Xiao, Qian. Time Delay Estimation for Sound Source Localization Using CNN-Based Multi-GCC Feature Fusion [J]. IEEE ACCESS, 2023, 11: 140789-140800
[14] López-Espejo, I., 2021, Deep Spoken Keyword Spotting: An Overview
[15] Vera-Diaz, Juan Manuel; Pizarro, Daniel; Macias-Guarasa, Javier. Towards End-to-End Acoustic Localization Using Deep Learning: From Audio Signals to Source Position Coordinates [J]. SENSORS, 2018, 18 (10)
[16] Iandola, F. N., 2016, arXiv:1602.07360, DOI 10.48550/arXiv.1602.07360
[17] Schmidt, R. O. Multiple Emitter Location and Signal Parameter Estimation [J]. IEEE TRANSACTIONS ON ANTENNAS AND PROPAGATION, 1986, 34 (03): 276-280
[18] Seo, Deokjin; Oh, Heung-Seon; Jung, Yuchul. Wav2KWS: Transfer Learning From Speech Representations for Keyword Spotting [J]. IEEE ACCESS, 2021, 9: 80682-80691
[19] Shan, C. H., 2018, arXiv:1803.10916
[20] Sundar, H., 2020, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), p. 4642, DOI 10.1109/ICASSP40776.2020.9054090