SAPBERT: Speaker-Aware Pretrained BERT for Emotion Recognition in Conversation

Cited by: 2
Authors
Lim, Seunguook [1 ]
Kim, Jihie [1 ]
Affiliation
[1] Dongguk Univ Seoul, Dept Artificial Intelligence, 30 Pildong Ro 1 Gil, Seoul 04620, South Korea
Keywords
natural language processing; emotion recognition in conversation; dialogue modeling; pre-training; hierarchical BERT;
DOI
10.3390/a16010008
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Emotion recognition in conversation (ERC) is receiving increasing attention as interactions between humans and machines grow across services such as chatbots and virtual assistants. Because emotional expressions within a conversation can depend heavily on the contextual information of the participating speakers, it is important to capture both self-dependency and inter-speaker dynamics. In this study, we propose a new pre-trained model, SAPBERT, which learns to identify speakers in a conversation in order to capture speaker-dependent contexts and address the ERC task. SAPBERT is pre-trained with three objectives: Speaker Classification (SC), Masked Utterance Regression (MUR), and Last Utterance Generation (LUG). We investigate whether our pre-trained speaker-aware model can be leveraged to capture speaker-dependent contexts for ERC tasks. Experiments show that our approach outperforms baseline models, demonstrating the effectiveness and validity of our method.
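To make the three pre-training objectives concrete, the sketch below shows one way such training examples could be constructed from a conversation. This is an illustrative assumption, not the authors' actual preprocessing: the `MASK` token name, the example formats, and the use of raw text targets (rather than utterance embeddings for MUR) are all hypothetical.

```python
import random

# Assumed placeholder token standing in for a masked utterance (hypothetical).
MASK = "[MASK_UTT]"

def build_examples(conversation, seed=0):
    """Build toy examples for the three SAPBERT-style objectives.

    conversation: list of (speaker, utterance) pairs.
    Returns (sc, mur, lug) where:
      sc  -- Speaker Classification: (utterance, speaker) pairs
      mur -- Masked Utterance Regression: (masked dialogue, hidden utterance)
      lug -- Last Utterance Generation: (context, final utterance)
    """
    rng = random.Random(seed)

    # SC: predict which speaker produced each utterance.
    sc = [(utt, spk) for spk, utt in conversation]

    # MUR: hide one randomly chosen utterance; the model would regress
    # toward its representation (here we just keep the target text).
    i = rng.randrange(len(conversation))
    masked = [utt if j != i else MASK
              for j, (_, utt) in enumerate(conversation)]
    mur = (masked, conversation[i][1])

    # LUG: generate the final utterance from the preceding context.
    context = [utt for _, utt in conversation[:-1]]
    lug = (context, conversation[-1][1])

    return sc, mur, lug

dialog = [("A", "Hi, how are you?"),
          ("B", "I'm fine, thanks."),
          ("A", "Glad to hear it.")]
sc, mur, lug = build_examples(dialog)
```

In an actual implementation, each objective would contribute a loss term (e.g. cross-entropy for SC, a regression loss for MUR, and a generation loss for LUG) that are jointly minimized during pre-training.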
Pages: 16