Transformer models for text-based emotion detection: a review of BERT-based approaches

Cited: 0
Authors
Francisca Adoma Acheampong
Henry Nunoo-Mensah
Wenyu Chen
Affiliations
[1] Computational Intelligence Lab, School of Computer Science and Technology, University of Electronic Science and Technology of China
[2] Connected Devices Lab, Department of Computer Engineering, Kwame Nkrumah University of Science and Technology
Source
Artificial Intelligence Review | 2021 / Volume 54
Keywords
Natural language processing; Sentiment analysis; Text-based emotion detection; Transformers
DOI
Not available
Abstract
The importance of contextual information in most natural language processing (NLP) applications cannot be overemphasized. Extracting context yields significant improvements in many NLP tasks, including emotion recognition from text. This paper discusses transformer-based models for NLP tasks and highlights the pros and cons of each identified model. The models discussed include the Generative Pre-training (GPT) model and its variants, Transformer-XL, Cross-lingual Language Models (XLM), and Bidirectional Encoder Representations from Transformers (BERT). Considering BERT's strength and popularity in text-based emotion detection, the paper reviews recent works in which researchers proposed various BERT-based models; for each work, the survey presents its contributions, results, limitations, and the datasets used. We also provide future research directions to encourage further work on text-based emotion detection with these models.
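The mechanism shared by every model the survey covers (GPT, Transformer-XL, XLM, BERT) is scaled dot-product self-attention, which is precisely how these models extract the contextual information the abstract emphasizes: each token's representation is re-weighted by its relevance to every other token. A minimal single-head, unmasked NumPy sketch (illustrative only; the toy shapes and function name are assumptions, not code from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V.
    The core operation underlying BERT, GPT, Transformer-XL, and XLM."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarities
    # numerically stable softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                               # context-weighted mix of values

# Toy example: 3 token embeddings of dimension 4, attending to themselves
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))
context = scaled_dot_product_attention(X, X, X)      # self-attention
print(context.shape)                                 # (3, 4)
```

Each output row is a convex combination of the value rows, so every token's new representation blends information from the whole sequence, which is what makes these models context-aware.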
Pages: 5789–5829
Page count: 40