SEF-Net: Speaker Embedding Free Target Speaker Extraction Network

Cited by: 6
Authors
Zeng, Bang [1 ,2 ]
Suo, Hongbin [3 ]
Wan, Yulong [3 ]
Li, Ming [1 ,2 ]
Affiliations
[1] Wuhan Univ, Sch Comp Sci, Wuhan, Peoples R China
[2] Duke Kunshan Univ, Data Sci Res Ctr, Kunshan, Peoples R China
[3] OPPO, Data&AI Engn Syst, Beijing, Peoples R China
Source
INTERSPEECH 2023 | 2023
Funding
National Natural Science Foundation of China;
Keywords
Target speaker extraction; speaker embedding free; dual-path; conformer; separation;
DOI
10.21437/Interspeech.2023-1749
CLC Number
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
Most target speaker extraction methods use the target speaker embedding as reference information. However, the speaker embedding extracted by a speaker recognition module may not be optimal for target speaker extraction tasks. In this paper, we propose the Speaker Embedding Free target speaker extraction Network (SEF-Net), a novel target speaker extraction model that does not rely on speaker embeddings. SEF-Net uses cross multi-head attention in the transformer decoder to implicitly utilize the speaker information in the reference speech's conformer encoding outputs. Experimental results show that the proposed model achieves performance comparable to other target speaker extraction models. SEF-Net offers a feasible new way to perform target speaker extraction without a speaker embedding extractor or a speaker recognition loss function.
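The abstract's core mechanism is cross multi-head attention: queries come from the mixture features while keys and values come from the reference speech's encoder outputs, so speaker identity is injected without any explicit embedding vector. The following is a minimal single-head sketch of that conditioning pattern, not the authors' implementation; all shapes (`T_mix`, `T_ref`, `d_model`) and the random projection weights are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(mix_feats, ref_feats, d_model, seed=0):
    """Single-head cross-attention sketch: queries from the mixture,
    keys/values from the reference-speech encoding (hypothetical weights)."""
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wk = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Wv = rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
    Q = mix_feats @ Wq                       # (T_mix, d_model)
    K = ref_feats @ Wk                       # (T_ref, d_model)
    V = ref_feats @ Wv                       # (T_ref, d_model)
    scores = (Q @ K.T) / np.sqrt(d_model)    # (T_mix, T_ref) attention logits
    return softmax(scores, axis=-1) @ V      # mixture frames re-expressed
                                             # in terms of reference content

# Toy shapes: 100 mixture frames attend over 80 reference frames, 64-dim features.
out = cross_attention(np.ones((100, 64)), np.ones((80, 64)), d_model=64)
```

Each mixture frame produces a weighted combination of reference-speech features, which is how speaker information can steer extraction implicitly, with no speaker recognition module or embedding loss in the loop.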
Pages: 3452-3456
Page count: 5