PEFT-SER: On the Use of Parameter Efficient Transfer Learning Approaches For Speech Emotion Recognition Using Pre-trained Speech Models

Cited by: 0
Authors
Feng, Tiantian [1 ]
Narayanan, Shrikanth [2 ]
Affiliations
[1] Univ Southern Calif, Dept Comp Sci, Los Angeles, CA 90007 USA
[2] Univ Southern Calif, Dept Elect & Comp Engn, Los Angeles, CA USA
Source
2023 11TH INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION, ACII | 2023
Keywords
Speech; emotion recognition; parameter-efficient fine-tuning; pre-trained model; CORPUS;
DOI
10.1109/ACIIW59127.2023.10388152
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Many recent studies have focused on fine-tuning pre-trained models for speech emotion recognition (SER), yielding promising performance compared to traditional methods that rely largely on low-level, knowledge-inspired acoustic features. These pre-trained speech models learn general-purpose speech representations from large-scale datasets using self-supervised or weakly-supervised learning objectives. Despite the significant advances pre-trained architectures have brought to SER, fine-tuning these large models for each dataset requires storing a complete copy of the model weights per task, rendering them impractical to deploy in real-world settings. As an alternative, this work explores parameter-efficient fine-tuning (PEFT) approaches for adapting pre-trained speech models to emotion recognition. Specifically, we evaluate the efficacy of adapter tuning, embedding prompt tuning, and LoRA (low-rank adaptation) on four popular SER testbeds. Our results reveal that LoRA achieves the best fine-tuning performance in emotion recognition while enhancing fairness and requiring only a minimal number of additional weight parameters. Furthermore, our findings offer novel insights into future research directions in SER, distinct from existing approaches that focus on directly fine-tuning the model architecture. Our code is publicly available at: https://github.com/usc-sail/peft-ser.
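To make the parameter savings in the abstract concrete, the following is a minimal, illustrative sketch of the LoRA idea in NumPy. It is not the authors' implementation: the class name, rank `r`, scaling `alpha`, and layer sizes are assumptions chosen for illustration. The frozen pre-trained weight is left untouched, and only two low-rank factors would be trained, which is why each task needs only a small set of extra parameters rather than a full copy of the model.

```python
import numpy as np

class LoRALinear:
    """Illustrative LoRA-adapted linear layer (hypothetical sketch).

    The pre-trained weight W is frozen; only the low-rank factors
    A (r x d_in) and B (d_out x r) would be trained, adding
    r * (d_in + d_out) parameters instead of d_out * d_in.
    """

    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W  # frozen pre-trained weight, shape (d_out, d_in)
        # A gets a small random init; B starts at zero, so the adapted
        # layer initially matches the pre-trained layer exactly.
        self.A = rng.normal(scale=0.01, size=(r, W.shape[1]))
        self.B = np.zeros((W.shape[0], r))
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the scaled low-rank update B @ A @ x.
        return self.W @ x + self.scale * (self.B @ (self.A @ x))

# Rough parameter accounting for one 768x768 projection at rank 4:
d_out, d_in, r = 768, 768, 4
extra = r * (d_in + d_out)   # trainable LoRA parameters
full = d_out * d_in          # parameters touched by full fine-tuning
print(f"trainable fraction: {extra / full:.4f}")  # prints "trainable fraction: 0.0104"
```

Because `B` is initialized to zero, the adapted model reproduces the pre-trained model before any training, and only the `A`/`B` factors need to be saved per downstream dataset.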
Pages: 8