MULTI-VIEW SELF-ATTENTION BASED TRANSFORMER FOR SPEAKER RECOGNITION

Cited by: 26
Authors
Wang, Rui [1 ,4 ]
Ao, Junyi [2 ,3 ,4 ]
Zhou, Long [4 ]
Liu, Shujie [4 ]
Wei, Zhihua [1 ]
Ko, Tom [2 ]
Li, Qing [3 ]
Zhang, Yu [2 ]
Affiliations
[1] Tongji Univ, Dept Comp Sci & Technol, Shanghai, Peoples R China
[2] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen, Guangdong, Peoples R China
[3] Hong Kong Polytech Univ, Dept Comp, Hong Kong, Peoples R China
[4] Microsoft Res Asia, Beijing, Peoples R China
Source
2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP) | 2022
Keywords
speaker recognition; Transformer; speaker identification; speaker verification
DOI
10.1109/ICASSP43922.2022.9746639
CLC Classification
O42 [Acoustics]
Discipline Codes
070206; 082403
Abstract
Initially developed for natural language processing (NLP), the Transformer model is now widely used for speech processing tasks such as speaker recognition, owing to its powerful sequence modeling capabilities. However, conventional self-attention was originally designed for modeling textual sequences, without considering the characteristics of speech and speaker modeling. Moreover, the different Transformer variants for speaker recognition have not been well studied. In this work, we propose a novel multi-view self-attention mechanism and present an empirical study of different Transformer variants, with and without the proposed attention mechanism, for speaker recognition. Specifically, to balance the ability to capture global dependencies against the need to model locality, we propose a multi-view self-attention mechanism for the speaker Transformer, in which different attention heads attend to different ranges of the receptive field. Furthermore, we introduce and compare five Transformer variants with different network architectures, embedding locations, and pooling methods for learning speaker embeddings. Experimental results on the VoxCeleb1 and VoxCeleb2 datasets show that the proposed multi-view self-attention mechanism improves speaker recognition performance, and that the proposed speaker Transformer network achieves excellent results compared with state-of-the-art models.
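To make the mechanism concrete, here is a minimal PyTorch sketch of per-head restricted attention in the spirit described above: each head is assigned its own receptive-field range via masking, so some heads attend globally while others attend only to nearby frames. The function name multi_view_attention, the head count, and the window sizes are illustrative assumptions, not the authors' implementation.

import math
import torch

def multi_view_attention(q, k, v, head_windows):
    # q, k, v: (batch, heads, seq_len, dim_per_head).
    # head_windows: one entry per head; an int restricts that head to
    # positions within that distance, None leaves it a global view.
    b, h, t, d = q.shape
    scores = q @ k.transpose(-2, -1) / math.sqrt(d)      # (b, h, t, t)
    pos = torch.arange(t, device=q.device)
    dist = (pos[None, :] - pos[:, None]).abs()           # |i - j| for all position pairs
    for head, w in enumerate(head_windows):
        if w is not None:                                # local view: mask far positions
            scores[:, head].masked_fill_(dist > w, float("-inf"))
    attn = scores.softmax(dim=-1)                        # per-head attention weights
    return attn @ v                                      # (b, h, t, d)

# Usage: four heads mixing two global views with two local views of different ranges.
q = k = v = torch.randn(2, 4, 100, 64)
out = multi_view_attention(q, k, v, head_windows=[None, None, 16, 4])
print(out.shape)  # torch.Size([2, 4, 100, 64])

Masking at the score level keeps each head's softmax normalized over only its visible range, which is one straightforward way to give different heads different receptive fields.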
Pages: 6732-6736
Page count: 5