Multimodal and Multi-view Models for Emotion Recognition

Cited by: 0
Authors
Aguilar, Gustavo [1]
Rozgic, Viktor [2]
Wang, Weiran [2]
Wang, Chao [2]
Affiliations
[1] Univ Houston, Houston, TX 77004 USA
[2] Amazon Com, Seattle, WA 98108 USA
Source
57TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2019) | 2019
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Studies on emotion recognition (ER) show that combining lexical and acoustic information yields more robust and accurate models. Most studies focus on settings where both modalities are available at training and evaluation time. In practice, however, this is not always the case: obtaining ASR output can be a bottleneck in a deployment pipeline due to computational cost or privacy constraints. To address this challenge, we study how to combine acoustic and lexical modalities efficiently during training while still delivering a deployable acoustic model that requires no lexical inputs. We first experiment with multimodal models and two attention mechanisms to assess how much benefit lexical information provides. We then frame the task as a multi-view learning problem, inducing semantic information from a multimodal model into our acoustic-only network via a contrastive loss function. Our multimodal model outperforms the previous state of the art reported on the USC-IEMOCAP dataset for combined lexical and acoustic information, and our multi-view-trained acoustic network significantly surpasses models trained exclusively on acoustic features.
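The abstract names a contrastive loss for the multi-view step but gives no formula. Below is a minimal PyTorch sketch of one common hinge-style formulation of this idea: pulling each acoustic-only embedding toward the multimodal embedding of the same utterance and pushing it away from the multimodal embeddings of other utterances in the batch. All names (multiview_contrastive_loss, acoustic_emb, multimodal_emb), the margin value, and the choice of cosine similarity are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn.functional as F

def multiview_contrastive_loss(acoustic_emb, multimodal_emb, margin=0.5):
    # Normalize so that dot products are cosine similarities.
    a = F.normalize(acoustic_emb, dim=1)    # (B, D) acoustic-only view
    m = F.normalize(multimodal_emb, dim=1)  # (B, D) multimodal view
    sim = a @ m.t()                         # (B, B) pairwise similarities
    pos = sim.diag().unsqueeze(1)           # matched pairs, shape (B, 1)
    # Hinge: each mismatched pair should score at least `margin`
    # below the matched pair in the same row.
    losses = F.relu(margin - pos + sim)
    losses = losses - torch.diag(losses.diag())  # zero out matched pairs
    return losses.mean()

In a multi-view training loop under these assumptions, multimodal_emb would come from the multimodal (acoustic plus lexical) network and acoustic_emb from the acoustic-only network being trained, so the acoustic network learns to approximate the joint representation while remaining deployable without lexical inputs.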
Pages: 991-1002
Page count: 12