Combined Keyword Spotting and Localization Network Based on Multi-Task Learning

Cited by: 0
Authors
Ko, Jungbeom [1 ]
Kim, Hyunchul [2 ]
Kim, Jungsuk [3 ]
Affiliations
[1] Gachon Univ, Gachon Adv Inst Hlth Sci & Technol GAIHST, Dept Hlth Sci & Technol, Incheon 21936, South Korea
[2] Univ Calif Berkeley, Sch Informat, 102 South Hall 4600, Berkeley, CA 94720 USA
[3] Gachon Univ, Coll IT Convergence, Dept Biomed Engn, Seongnam Si 13120, South Korea
Funding
National Research Foundation, Singapore;
Keywords
deep neural network; keyword spotting; sound source localization; multi-task learning;
DOI
10.3390/math12213309
CLC number
O1 [Mathematics];
Subject classification codes
0701; 070101;
Abstract
The advent of voice-assistance technology and its integration into smart devices has enabled many useful services, such as texting and application execution. However, most assistive technologies cannot act as a human listener would, localizing the speaker while selectively spotting meaningful keywords. Because keyword spotting (KWS) and sound source localization (SSL) are essential and must operate in real time, the memory and computational efficiency of the neural network model is crucial. In this paper, a single neural network for KWS and SSL is proposed to overcome the limitations of running KWS and SSL sequentially, which requires more memory and inference time. The proposed model uses multi-task learning to make efficient use of the device's limited resources: a shared encoder forms the initial layers and extracts common features from the multichannel audio data, and task-specific parallel layers then use these features for KWS and SSL. The model was evaluated on a synthetic multi-speaker dataset, and a 7-module shared encoder structure was identified as optimal in terms of KWS accuracy, direction-of-arrival (DOA) accuracy, DOA error, and latency. It achieved a KWS accuracy of 94.51%, a DOA error of 12.397 degrees, and a DOA accuracy of 89.86%. Owing to the shared network architecture, the proposed model requires significantly less memory and improves inference time without compromising KWS accuracy, DOA error, or DOA accuracy.
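The shared-encoder, parallel-heads design described in the abstract can be sketched as follows. The count of 7 shared modules follows the optimum reported in the abstract, but the layer types, channel width, keyword vocabulary size, and DOA discretization below are illustrative assumptions, not the paper's exact configuration (PyTorch):

```python
import torch
import torch.nn as nn


class SharedEncoderMTL(nn.Module):
    """Sketch of a multi-task KWS + SSL network: a shared encoder extracts
    common features from multichannel audio, then two task-specific heads
    run in parallel (keyword classification and DOA estimation)."""

    def __init__(self, mic_channels=4, n_keywords=12, n_doa_sectors=36,
                 n_shared_modules=7, width=64):
        super().__init__()
        # Shared encoder: a stack of conv "modules" over the time axis.
        layers, ch = [], mic_channels
        for _ in range(n_shared_modules):
            layers += [nn.Conv1d(ch, width, kernel_size=3, padding=1),
                       nn.BatchNorm1d(width),
                       nn.ReLU()]
            ch = width
        self.encoder = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool1d(1)  # collapse time dimension
        # Task-specific parallel heads sharing the same features.
        self.kws_head = nn.Linear(width, n_keywords)     # keyword logits
        self.doa_head = nn.Linear(width, n_doa_sectors)  # DOA sector logits

    def forward(self, x):
        # x: (batch, mic_channels, time) raw or pre-processed audio
        h = self.pool(self.encoder(x)).squeeze(-1)  # (batch, width)
        return self.kws_head(h), self.doa_head(h)


model = SharedEncoderMTL()
x = torch.randn(2, 4, 16000)  # 2 clips, 4 mics, 1 s at 16 kHz
kws_logits, doa_logits = model(x)
```

Because both heads read the same encoder output, the encoder's parameters and its forward pass are paid for once per clip, which is where the memory and latency savings over sequential KWS-then-SSL pipelines come from.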
Pages: 14