A multi-view network for real-time emotion recognition in conversations

Cited by: 47
Authors
Ma, Hui [1 ]
Wang, Jian [1 ]
Lin, Hongfei [1 ]
Pan, Xuejun [2 ]
Zhang, Yijia [1 ]
Yang, Zhihao [1 ]
Affiliations
[1] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian, Peoples R China
[2] Dalian Univ Technol, Sch Control Sci & Engn, Dalian, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China;
Keywords
Emotion recognition; Real-time conversations; Multi-view learning; Word-level dependencies; Utterance-level dependencies; MODEL;
DOI
10.1016/j.knosys.2021.107751
CLC number
TP18 [Theory of Artificial Intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Real-time emotion recognition in conversations (RTERC), the task of using the historical context to identify the emotion of a query utterance in a conversation, is important for opinion mining and for building empathetic machines. Existing works mainly focus on obtaining the representation of each utterance separately and then utilizing these utterance-level features to model the emotion representation of the query. Such approaches treat each utterance as a unit and capture utterance-level dependencies in the context, but ignore word-level dependencies among different utterances. In this paper, we propose a multi-view network (MVN) that explores the emotion representation of a query from two views: a word-level view and an utterance-level view. In the word-level view, MVN treats the context and the query as one word sequence and models word-level dependencies among utterances. In the utterance-level view, MVN extracts the representation of each utterance separately and then models utterance-level dependencies in the context. Experimental results on two public emotion conversation datasets show that the proposed model outperforms state-of-the-art baselines. (C) 2021 Elsevier B.V. All rights reserved.
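The two-view design described in the abstract can be made concrete with a short sketch. The PyTorch code below is a minimal, illustrative reconstruction based only on the abstract: the class name MultiViewSketch, the choice of bidirectional GRUs, all dimensions, and the concatenation-based fusion are assumptions, not the authors' actual MVN implementation.

import torch
import torch.nn as nn

class MultiViewSketch(nn.Module):
    """Illustrative two-view encoder; NOT the authors' implementation."""
    def __init__(self, vocab_size, emb_dim=100, hid_dim=128, n_emotions=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Word-level view: context + query as one word sequence, so the GRU
        # can capture word dependencies that cross utterance boundaries.
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True,
                               bidirectional=True)
        # Utterance-level view: encode each utterance on its own, then run a
        # second GRU over the sequence of utterance vectors.
        self.utt_encoder = nn.GRU(emb_dim, hid_dim, batch_first=True,
                                  bidirectional=True)
        self.ctx_rnn = nn.GRU(2 * hid_dim, hid_dim, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(4 * hid_dim, n_emotions)

    def forward(self, flat_words, utt_words):
        # flat_words: (batch, total_len)       context + query as one sequence
        # utt_words:  (batch, n_utts, utt_len) each utterance padded separately
        _, h = self.word_rnn(self.embed(flat_words))
        word_view = torch.cat([h[-2], h[-1]], dim=-1)          # (batch, 2*hid)

        b, n, l = utt_words.shape
        _, h = self.utt_encoder(self.embed(utt_words.view(b * n, l)))
        utt_vecs = torch.cat([h[-2], h[-1]], dim=-1).view(b, n, -1)
        _, h = self.ctx_rnn(utt_vecs)
        utt_view = torch.cat([h[-2], h[-1]], dim=-1)           # (batch, 2*hid)

        # Fuse the two views (plain concatenation here) and classify the query.
        return self.classifier(torch.cat([word_view, utt_view], dim=-1))

For example, model = MultiViewSketch(vocab_size=5000) maps a flat word-index tensor of shape (batch, total_len) plus a per-utterance tensor of shape (batch, n_utts, utt_len) to emotion logits for the query:

model = MultiViewSketch(vocab_size=5000)
logits = model(torch.randint(1, 5000, (2, 40)),
               torch.randint(1, 5000, (2, 5, 12)))   # -> shape (2, 6)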
Pages: 10