Speaker Representation Learning via Contrastive Loss with Maximal Speaker Separability

Cited: 0
Authors
Li, Zhe [1 ]
Mak, Man-Wai [1 ]
Affiliations
[1] Hong Kong Polytech Univ, Hong Kong, Peoples R China
Source
PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC) | 2022
Keywords
RECOGNITION;
DOI
N/A
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A great challenge in speaker representation learning using deep models is to design learning objectives that can enhance the discrimination of unseen speakers under unseen domains. This work proposes a supervised contrastive learning objective to learn a speaker embedding space by effectively leveraging the label information in the training data. In such a space, utterance pairs spoken by the same or similar speakers stay close, while utterance pairs spoken by different speakers lie far apart. For each training speaker, we perform random data augmentation on their utterances to form positive pairs, while utterances from different speakers form negative pairs. To maximize speaker separability in the embedding space, we incorporate the additive angular-margin loss into the contrastive learning objective. Experimental results on CN-Celeb show that this new learning objective enables an ECAPA-TDNN to learn an embedding space with strong speaker discrimination. The contrastive learning objective is easy to implement, and we provide PyTorch code at https://github.com/shanmon110/AAMSupCon.
Pages: 962-967
Page count: 6