Speech Emotion Recognition Using Cascaded Attention Network with Joint Loss for Discrimination of Confusions

Citations: 0
Authors
Yang Liu
Haoqin Sun
Wenbo Guan
Yuqi Xia
Zhen Zhao
Affiliations
[1] Qingdao University of Science and Technology, School of Information Science and Technology
Source
Machine Intelligence Research | 2023, Vol. 20
Keywords
Speech emotion recognition (SER); 3-dimensional (3D) feature; cascaded attention network (CAN); triplet loss; joint loss
DOI
Not available
Abstract
Due to the complexity of emotional expression, recognizing emotions from speech is a critical and challenging task. In many studies, certain emotions are frequently misclassified. In this paper, we propose a new framework that integrates a cascaded attention mechanism and a joint loss for speech emotion recognition (SER), aiming to resolve feature confusions among emotions that are difficult to classify correctly. First, we extract mel-frequency cepstral coefficients (MFCCs) together with their deltas and delta-deltas to form 3-dimensional (3D) features, thus effectively reducing the interference of external factors. Second, we employ spatiotemporal attention to selectively discover target emotion regions in the input features, where self-attention with head fusion captures the long-range dependency of temporal features. Finally, the joint loss function is employed to distinguish emotional embeddings with high similarity and thereby enhance the overall performance. Experiments on the interactive emotional dyadic motion capture (IEMOCAP) database indicate that the method achieves a positive improvement of 2.49% and 1.13% in weighted accuracy (WA) and unweighted accuracy (UA), respectively, compared to state-of-the-art strategies.
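The abstract does not give the exact form of the joint loss; a minimal sketch of a common formulation for this kind of objective (softmax cross-entropy for classification plus a margin-based triplet loss on the emotion embeddings, with an assumed weighting factor `lam`) might look like the following. The function names, the margin value, and the weighting are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def cross_entropy(logits, label):
    """Softmax cross-entropy for one utterance's class logits."""
    z = logits - logits.max()                 # shift for numerical stability
    log_probs = z - np.log(np.exp(z).sum())   # log-softmax
    return -log_probs[label]

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Pull same-emotion embeddings together, push different ones apart."""
    d_ap = np.linalg.norm(anchor - positive)  # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)  # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)

def joint_loss(logits, label, anchor, positive, negative, lam=0.5):
    # lam weights the metric-learning term against the classification term;
    # the actual balance used in the paper may differ.
    return cross_entropy(logits, label) + lam * triplet_loss(anchor, positive, negative)
```

In such a setup the triplet term penalizes pairs of emotions whose embeddings are highly similar (e.g., easily confused classes), while the cross-entropy term drives the overall classification.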
Pages: 595–604 (9 pages)