Multimodal Embeddings From Language Models for Emotion Recognition in the Wild

Cited by: 10
Authors
Tseng, Shao-Yen [1 ]
Narayanan, Shrikanth [1 ]
Georgiou, Panayiotis [2 ]
Affiliations
[1] Univ Southern Calif, Dept Elect & Comp Engn, Los Angeles, CA 90089 USA
[2] Apple Inc, Siri Understanding, Culver City, CA 90016 USA
Keywords
Acoustics; Task analysis; Feature extraction; Convolution; Emotion recognition; Context modeling; Bit error rate; Machine learning; unsupervised learning; natural language processing; speech processing; emotion recognition; SPEECH;
DOI
10.1109/LSP.2021.3065598
CLC Classification Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology];
Discipline Classification Codes
0808; 0809;
Abstract
Word embeddings such as ELMo and BERT have been shown to model word usage in language with greater efficacy through contextualized learning on large-scale language corpora, resulting in significant performance improvements across many natural language processing tasks. In this work, we integrate acoustic information into contextualized lexical embeddings through the addition of a parallel stream to the bidirectional language model. This multimodal language model is trained on spoken language data that includes both text and audio modalities. We show that embeddings extracted from this model integrate paralinguistic cues into word meanings, and we demonstrate that they provide vital affective information by applying them to the task of speaker emotion recognition.
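The core idea of the abstract, enriching each word's lexical embedding with information from its aligned acoustic frames, can be illustrated with a minimal NumPy sketch. This is a hypothetical toy (mean-pooled acoustics, concatenation, and a single random projection), not the paper's actual bidirectional language model with a parallel acoustic stream; all dimensions and names below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def pool_acoustic(frames):
    # Mean-pool the variable-length acoustic frames aligned to one word.
    return frames.mean(axis=0)

def fuse(lexical, acoustic, W):
    # Concatenate the lexical embedding with the pooled acoustics,
    # then project into a shared multimodal space (toy stand-in for
    # the paper's parallel-stream language model).
    return np.tanh(W @ np.concatenate([lexical, acoustic]))

# Hypothetical dimensions: 8-d lexical, 4-d acoustic, 6-d fused output.
d_lex, d_ac, d_out = 8, 4, 6
W = rng.standard_normal((d_out, d_lex + d_ac))

# One toy utterance: 3 words, each with a lexical vector and a
# different number of aligned acoustic frames.
words = [rng.standard_normal(d_lex) for _ in range(3)]
frames = [rng.standard_normal((n, d_ac)) for n in (5, 3, 7)]

embeddings = [fuse(w, pool_acoustic(f), W) for w, f in zip(words, frames)]
print(len(embeddings), embeddings[0].shape)  # 3 (6,)
```

A downstream emotion classifier would consume these fused per-word vectors instead of text-only embeddings, which is how the paper evaluates the affective content of its multimodal representations.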
Pages: 608-612
Page count: 5
Related Papers (50 total)
  • [1] Emotion Recognition from Videos Using Multimodal Large Language Models
    Vaiani, Lorenzo
    Cagliero, Luca
    Garza, Paolo
    FUTURE INTERNET, 2024, 16 (07)
  • [2] Interactive Multimodal Attention Network for Emotion Recognition in Conversation
    Ren, Minjie
    Huang, Xiangdong
    Shi, Xiaoqi
    Nie, Weizhi
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 1046 - 1050
  • [3] Multimodal Emotion Recognition With Temporal and Semantic Consistency
    Chen, Bingzhi
    Cao, Qi
    Hou, Mixiao
    Zhang, Zheng
    Lu, Guangming
    Zhang, David
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2021, 29 : 3592 - 3603
  • [4] Multimodal Emotion Recognition From EEG Signals and Facial Expressions
    Wang, Shuai
    Qu, Jingzi
    Zhang, Yong
    Zhang, Yidie
    IEEE ACCESS, 2023, 11 : 33061 - 33068
  • [5] Emotion recognition models for companion robots
    Nimmagadda, Ritvik
    Arora, Kritika
    Martin, Miguel Vargas
    JOURNAL OF SUPERCOMPUTING, 2022, 78 (11) : 13710 - 13727
  • [6] RobinNet: A Multimodal Speech Emotion Recognition System With Speaker Recognition for Social Interactions
    Khurana, Yash
    Gupta, Swamita
    Sathyaraj, R.
    Raja, S. P.
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2022, 11 (01) : 478 - 487
  • [7] Multimodal Emotion Recognition Based on Facial Expressions, Speech, and EEG
    Pan, Jiahui
    Fang, Weijie
    Zhang, Zhihang
    Chen, Bingzhi
    Zhang, Zheng
    Wang, Shuihua
    IEEE OPEN JOURNAL OF ENGINEERING IN MEDICINE AND BIOLOGY, 2024, 5 : 396 - 403
  • [8] EmoBed: Strengthening Monomodal Emotion Recognition via Training with Crossmodal Emotion Embeddings
    Han, Jing
    Zhang, Zixing
    Ren, Zhao
    Schuller, Bjorn
    IEEE TRANSACTIONS ON AFFECTIVE COMPUTING, 2021, 12 (03) : 553 - 564
  • [9] DEEP MULTIMODAL LEARNING FOR EMOTION RECOGNITION IN SPOKEN LANGUAGE
    Gu, Yue
    Chen, Shuhong
    Marsic, Ivan
    2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018, : 5079 - 5083
  • [10] Multimodal Fusion based on Information Gain for Emotion Recognition in the Wild
    Ghaleb, Esam
    Popa, Mirela
    Hortal, Enrique
    Asteriadis, Stylianos
    PROCEEDINGS OF THE 2017 INTELLIGENT SYSTEMS CONFERENCE (INTELLISYS), 2017, : 814 - 823