Generative Data Augmentation Guided by Triplet Loss for Speech Emotion Recognition

Cited by: 1
Authors
Wang, Shijun [1 ]
Hemati, Hamed [1 ]
Gudnason, Jon [2 ]
Borth, Damian [1 ]
Affiliations
[1] Univ St Gallen, St Gallen, Switzerland
[2] Reykjavik Univ, Reykjavik, Iceland
Source
INTERSPEECH 2022 | 2022年
Keywords
speech emotion recognition; speech augmentation; cross-lingual; ADVERSARIAL NETWORKS; STARGAN;
DOI
10.21437/Interspeech.2022-10667
Chinese Library Classification (CLC)
O42 [Acoustics];
Subject Classification Codes
070206; 082403;
Abstract
Speech Emotion Recognition (SER) is crucial for human-computer interaction but remains a challenging problem because of two major obstacles: data scarcity and imbalance. Many SER datasets are substantially imbalanced, with utterances of one class (most often Neutral) far more frequent than those of other classes. Furthermore, only a few data resources are available for many existing spoken languages. To address these problems, we exploit a GAN-based augmentation model guided by a triplet network to improve SER performance given imbalanced and insufficient training data. We conduct experiments and demonstrate that: 1) with a highly imbalanced dataset, our augmentation strategy significantly improves SER performance (+8% recall score compared with the baseline); 2) moreover, in a cross-lingual benchmark, where we train a model with sufficient source-language utterances but very few target-language utterances (around 50 in our experiments), our augmentation strategy benefits SER performance for all three target languages.
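For context, the triplet objective that guides the augmentation model in this abstract encourages an anchor embedding to lie closer to a same-emotion (positive) embedding than to a different-emotion (negative) embedding by at least a margin. The sketch below is illustrative only, assuming standard Euclidean-distance triplet loss; it is not the authors' implementation, and the embeddings shown are hypothetical toy values.

```python
import math

def euclidean(u, v):
    # Euclidean distance between two embedding vectors of equal length
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet loss: max(0, d(anchor, positive) - d(anchor, negative) + margin)
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Toy 2-D embeddings: anchor is near the positive and far from the negative,
# so the margin is satisfied and the loss clamps to zero.
a, p, n = [0.0, 0.0], [0.1, 0.0], [3.0, 4.0]
print(triplet_loss(a, p, n))  # 0.1 - 5.0 + 1.0 = -3.9 -> clamped to 0.0
```

During training such a loss would be minimized over the embedding network so that same-emotion utterances cluster together, which is the property the paper uses to guide GAN-based augmentation.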
Pages: 391-395
Page count: 5
Cited References
27 in total
[1]  
[Anonymous], 2018, ARXIV
[2]  
[Anonymous], 2005, P INTERSPEECH
[3]  
[Anonymous], 2020, Excelra COVID-19 Drug Repurposing Database
[4]  
[Anonymous], arXiv
[5]   CycleGAN-based Emotion Style Transfer as Data Augmentation for Speech Emotion Recognition [J].
Bao, Fang ;
Neumann, Michael ;
Ngoc Thang Vu .
INTERSPEECH 2019, 2019, :2828-2832
[6]   IEMOCAP: interactive emotional dyadic motion capture database [J].
Busso, Carlos ;
Bulut, Murtaza ;
Lee, Chi-Chun ;
Kazemzadeh, Abe ;
Mower, Emily ;
Kim, Samuel ;
Chang, Jeannette N. ;
Lee, Sungbok ;
Narayanan, Shrikanth S. .
LANGUAGE RESOURCES AND EVALUATION, 2008, 42 (04) :335-359
[7]   Data Augmentation using GANs for Speech Emotion Recognition [J].
Chatziagapi, Aggelina ;
Paraskevopoulos, Georgios ;
Sgouropoulos, Dimitris ;
Pantazopoulos, Georgios ;
Nikandrou, Malvina ;
Giannakopoulos, Theodoros ;
Katsamanis, Athanasios ;
Potamianos, Alexandros ;
Narayanan, Shrikanth .
INTERSPEECH 2019, 2019, :171-175
[8]   Self-Supervised GANs via Auxiliary Rotation Loss [J].
Chen, Ting ;
Zhai, Xiaohua ;
Ritter, Marvin ;
Lucic, Mario ;
Houlsby, Neil .
2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, :12146-12155
[9]   StarGAN: Unified Generative Adversarial Networks for Multi-Domain Image-to-Image Translation [J].
Choi, Yunjey ;
Choi, Minje ;
Kim, Munyoung ;
Ha, Jung-Woo ;
Kim, Sunghun ;
Choo, Jaegul .
2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, :8789-8797
[10]  
Costantini G, 2014, LREC 2014 - NINTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, P3501