Vesper: A Compact and Effective Pretrained Model for Speech Emotion Recognition

Cited by: 23
Authors
Chen, Weidong [1]
Xing, Xiaofen [1]
Chen, Peihao [2]
Xu, Xiangmin [3,4]
Affiliations
[1] South China Univ Technol, Sch Elect & Informat Engn, Guangzhou 510640, Peoples R China
[2] South China Univ Technol, Sch Software Engn, Guangzhou 510640, Peoples R China
[3] South China Univ Technol, Sch Future Technol, Guangzhou 511442, Peoples R China
[4] Pazhou Lab, Guangzhou 510330, Peoples R China
Funding
National Key Research and Development Program of China
Keywords
Training; Emotion recognition; Adaptation models; Cross layer design; Computational modeling; Semantics; Speech recognition; Pretrained model; speech emotion recognition; self-supervised learning; representation learning; FRAMEWORK; NETWORK; ENHANCEMENT;
DOI
10.1109/TAFFC.2024.3369726
CLC number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
This article presents a paradigm that adapts general large-scale pretrained models (PTMs) to the speech emotion recognition task. Although PTMs shed new light on artificial general intelligence, they are constructed with general tasks in mind, and thus their efficacy for specific tasks can be further improved. Additionally, employing PTMs in practical applications can be challenging due to their considerable size. These limitations spawn another research direction, namely, optimizing large-scale PTMs for specific tasks to generate task-specific PTMs that are both compact and effective. In this paper, we focus on the speech emotion recognition task and propose an improVed emotion-specific pretrained encoder called Vesper. Vesper is pretrained on a speech dataset based on WavLM and takes emotional characteristics into account. To enhance its sensitivity to emotional information, Vesper employs an emotion-guided masking strategy to identify the regions that need masking. Subsequently, Vesper employs hierarchical and cross-layer self-supervision to improve its ability to capture acoustic and semantic representations, both of which are crucial for emotion recognition. Experimental results on the IEMOCAP, MELD, and CREMA-D datasets demonstrate that Vesper with 4 layers outperforms WavLM Base with 12 layers, and the performance of Vesper with 12 layers surpasses that of WavLM Large with 24 layers.
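The emotion-guided masking idea in the abstract can be sketched in code. The snippet below is a minimal illustrative sketch in PyTorch, assuming a simple frame-energy proxy for emotional salience; the function name emotion_guided_mask, the span length, and the energy-biased sampling are illustrative assumptions and not the authors' exact procedure, which is defined in the paper itself.

import torch

def emotion_guided_mask(features: torch.Tensor,
                        mask_ratio: float = 0.15,
                        span: int = 10) -> torch.Tensor:
    # features: (T, D) frame-level representations, e.g. from the CNN feature encoder.
    # Returns a boolean mask of shape (T,) marking frames to be masked.
    num_frames = features.size(0)
    num_spans = max(1, int(num_frames * mask_ratio / span))

    # Per-frame energy serves here as a crude stand-in for emotional salience.
    energy = features.pow(2).mean(dim=-1)        # (T,)
    probs = torch.softmax(energy, dim=0)         # higher-energy frames are sampled more often

    # Sample span start positions biased toward high-energy regions.
    starts = torch.multinomial(probs, num_spans, replacement=False)

    mask = torch.zeros(num_frames, dtype=torch.bool)
    for s in starts.tolist():
        mask[s:s + span] = True
    return mask

In a masked-prediction setup of this kind, the selected frames would typically be replaced by a learned mask embedding before the Transformer layers, with the self-supervised losses (here, hierarchical and cross-layer objectives) computed on the masked positions.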
Pages: 1711-1724
Page count: 14
相关论文
共 74 条
[1]  
Alayrac JB, 2022, ADV NEUR IN
[2]   Survey on bimodal speech emotion recognition from acoustic and linguistic information fusion [J].
Atmaja, Bagus Tris ;
Sasou, Akira ;
Akagi, Masato .
SPEECH COMMUNICATION, 2022, 140 :11-28
[3]  
Baevski A, 2020, ADV NEUR IN, V33
[4]  
Beal J., 2020, arXiv
[5]  
Brown TB, 2020, ADV NEUR IN, V33
[6]   IEMOCAP: interactive emotional dyadic motion capture database [J].
Busso, Carlos ;
Bulut, Murtaza ;
Lee, Chi-Chun ;
Kazemzadeh, Abe ;
Mower, Emily ;
Kim, Samuel ;
Chang, Jeannette N. ;
Lee, Sungbok ;
Narayanan, Shrikanth S. .
LANGUAGE RESOURCES AND EVALUATION, 2008, 42 (04) :335-359
[7]   CREMA-D: Crowd-Sourced Emotional Multimodal Actors Dataset [J].
Cao, Houwei ;
Cooper, David G. ;
Keutmann, Michael K. ;
Gur, Ruben C. ;
Nenkova, Ani ;
Verma, Ragini .
IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2014, 5 (04) :377-390
[8]  
Cao QQ, 2020, 58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020), P4487
[9]  
Chen DY, 2020, PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, P2463
[10]  
Chen GH, 2021, 2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), P15, DOI 10.1109/ICESIT53460.2021.9697054