CAM++: A Fast and Efficient Network for Speaker Verification Using Context-Aware Masking

Cited by: 7
Authors
Wang, Hui [1 ]
Zheng, Siqi [1 ]
Chen, Yafeng [1 ]
Cheng, Luyao [1 ]
Chen, Qian [1 ]
Affiliations
[1] Alibaba Grp, Speech Lab, Hangzhou, Peoples R China
Source
INTERSPEECH 2023 | 2023
Keywords
speaker verification; densely connected time delay neural network; context-aware masking; computational complexity
DOI
10.21437/Interspeech.2023-1513
CLC Classification Code
O42 [Acoustics];
Subject Classification Codes
070206; 082403
Abstract
Time delay neural network (TDNN) has been proven to be efficient for speaker verification. One of its successful variants, ECAPA-TDNN, achieved state-of-the-art performance at the cost of much higher computational complexity and slower inference speed. This makes it ill-suited to scenarios that demand high inference throughput under limited computational resources. We are thus interested in finding an architecture that can achieve the performance of ECAPA-TDNN and the efficiency of vanilla TDNN. In this paper, we propose an efficient network based on context-aware masking, namely CAM++, which uses a densely connected time delay neural network (D-TDNN) as its backbone and adopts a novel multi-granularity pooling to capture contextual information at different levels. Extensive experiments on two public benchmarks, VoxCeleb and CN-Celeb, demonstrate that the proposed architecture outperforms other mainstream speaker verification systems with lower computational cost and faster inference speed.
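The core mechanism named in the abstract, context-aware masking with multi-granularity pooling, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the bottleneck weights, the ReLU bottleneck, the segment length, and the simple global-plus-segment pooling below are illustrative assumptions standing in for the paper's actual design. The idea shown is that context vectors pooled at different granularities are projected through a small bottleneck into a (0, 1) mask that reweights the frame-level TDNN features.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def context_aware_mask(feats, w_down, w_up, seg_len=20):
    """Reweight a (channels, time) feature map using pooled context.

    feats:   (C, T) frame-level features from a TDNN layer
    w_down:  (C_bneck, C) projection into a small bottleneck
    w_up:    (C, C_bneck) projection back to the channel dimension
    seg_len: frames per local segment (hypothetical granularity choice)
    """
    C, T = feats.shape
    # Global granularity: mean over all frames -> (C, 1)
    global_ctx = feats.mean(axis=1, keepdims=True)
    # Segment granularity: mean within each local window, broadcast back
    seg_ctx = np.empty_like(feats)
    for start in range(0, T, seg_len):
        seg = feats[:, start:start + seg_len]
        seg_ctx[:, start:start + seg.shape[1]] = seg.mean(axis=1, keepdims=True)
    # Combine granularities (broadcasting (C,1) + (C,T)) and predict a
    # per-channel, per-frame mask in (0, 1) via a bottleneck MLP.
    ctx = global_ctx + seg_ctx
    mask = sigmoid(w_up @ np.maximum(w_down @ ctx, 0.0))  # (C, T)
    return feats * mask
```

Because the mask is a sigmoid output, each feature value is scaled toward zero by an amount that depends on both the utterance-level and segment-level context, which is the masking intuition the abstract describes.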
Pages: 5301 - 5305
Page count: 5
Related References
25 items in total
  • [1] Bai, Zhongxin; Zhang, Xiao-Lei. Speaker recognition based on deep learning: An overview. NEURAL NETWORKS, 2021, 140: 65-99.
  • [2] Chen Z., 2022, arXiv.
  • [3] Deng, Jiankang; Guo, Jia; Xue, Niannan; Zafeiriou, Stefanos. ArcFace: Additive Angular Margin Loss for Deep Face Recognition. CVPR 2019: 4685-4694.
  • [4] Desplanques, Brecht; Thienpondt, Jenthe; Demuynck, Kris. ECAPA-TDNN: Emphasized Channel Attention, Propagation and Aggregation in TDNN Based Speaker Verification. INTERSPEECH 2020: 3830-3834.
  • [5] Fan Y. ICASSP 2020: 7604. DOI 10.1109/ICASSP40776.2020.9054017.
  • [6] He K. CVPR 2016: 770. DOI 10.1109/CVPR.2016.90.
  • [7] Hu J. CVPR 2018: 7132. DOI 10.1109/CVPR.2018.00745.
  • [8] Huang G. CVPR 2017: 4700. DOI 10.1109/CVPR.2017.243.
  • [9] India, Miquel; Safari, Pooyan; Hernando, Javier. Self Multi-Head Attention for Speaker Recognition. INTERSPEECH 2019: 4305-4309.
  • [10] Ko T. ICASSP 2017: 5220. DOI 10.1109/ICASSP.2017.7953152.