Multi-task Representation Learning for Enhanced Emotion Categorization in Short Text

Cited by: 10
Authors
Sen, Anirban [1 ]
Sinha, Manjira [2 ]
Mannarswamy, Sandya [2 ]
Roy, Shourya [3 ]
Affiliations
[1] IIT Delhi, Computer Science & Engineering Department, New Delhi, India
[2] Conduent Labs India, Bangalore, Karnataka, India
[3] American Express, Big Data Labs, New York, NY, USA
Keywords
Multi-tasking; Emotion prediction; Representation learning; Joint learning
DOI
10.1007/978-3-319-57529-2_26
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Embedding-based dense contextual representations of data have proven effective in various NLP tasks, as they alleviate the burden of heavy feature engineering. However, generalized representation learning approaches do not capture task-specific subtleties. In addition, the computational model for each task is often developed in isolation, overlooking the interrelations among certain NLP tasks. Given that representation learning typically requires a substantial amount of labeled data, which is scarce, it is essential to explore learning embeddings jointly under the supervision of multiple related tasks while also incorporating task-specific attributes. Inspired by the basic premise of multi-task learning, which holds that correlations between related tasks can be exploited to improve classification, we propose a novel technique for building jointly learnt task-specific embeddings for emotion and sentiment prediction tasks. Here, the sentiment prediction task acts as an auxiliary input to enhance the primary emotion prediction task. Our experimental results demonstrate that embeddings learnt under the supervised signals of two related tasks outperform embeddings learnt in a single-task setup on the downstream task of emotion prediction.
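The joint-learning setup the abstract describes can be made concrete with a short sketch: a shared embedding layer and encoder feed two classification heads, and the auxiliary sentiment loss is added to the primary emotion loss so that both supervised signals shape the shared representation. The PyTorch code below is a minimal illustration under assumed dimensions, label counts, and loss weight (aux_weight); it is not the authors' actual architecture.

    # Minimal sketch of joint emotion/sentiment learning over a shared
    # embedding space. All names, sizes, and aux_weight are illustrative
    # assumptions, not the paper's reported configuration.
    import torch
    import torch.nn as nn

    class JointEmotionSentimentModel(nn.Module):
        def __init__(self, vocab_size, embed_dim=100, hidden_dim=64,
                     num_emotions=6, num_sentiments=3):
            super().__init__()
            # Shared parameters: the jointly learnt, task-informed embeddings.
            self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
            # Task-specific heads on top of the shared representation.
            self.emotion_head = nn.Linear(hidden_dim, num_emotions)      # primary task
            self.sentiment_head = nn.Linear(hidden_dim, num_sentiments)  # auxiliary task

        def forward(self, token_ids):
            embedded = self.embedding(token_ids)   # (batch, seq, embed_dim)
            _, hidden = self.encoder(embedded)     # hidden: (1, batch, hidden_dim)
            shared = hidden.squeeze(0)             # shared short-text representation
            return self.emotion_head(shared), self.sentiment_head(shared)

    def joint_loss(emotion_logits, sentiment_logits,
                   emotion_labels, sentiment_labels, aux_weight=0.5):
        # The auxiliary sentiment loss is down-weighted so it guides,
        # rather than dominates, the shared embedding space.
        ce = nn.CrossEntropyLoss()
        return (ce(emotion_logits, emotion_labels)
                + aux_weight * ce(sentiment_logits, sentiment_labels))

    # Toy usage: one training step on random data.
    model = JointEmotionSentimentModel(vocab_size=5000)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    tokens = torch.randint(1, 5000, (8, 20))              # batch of 8 short texts
    emo_y = torch.randint(0, 6, (8,))                     # emotion labels
    sent_y = torch.randint(0, 3, (8,))                    # sentiment labels
    optimizer.zero_grad()
    emo_logits, sent_logits = model(tokens)
    loss = joint_loss(emo_logits, sent_logits, emo_y, sent_y)
    loss.backward()
    optimizer.step()

Down-weighting the auxiliary term is one common way to let the related task regularize the embeddings used by the primary task; the weight itself would be tuned on validation data.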
Pages: 324-336
Page count: 13
Related papers (50 in total)
  • [1] Enhanced representation and multi-task learning for image annotation
    Binder, Alexander
    Samek, Wojciech
    Mueller, Klaus-Robert
    Kawanabe, Motoaki
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2013, 117 (05): 466-478
  • [2] Multi-task learning using a hybrid representation for text classification
    Lu, Guangquan
    Gan, Jiangzhang
    Yin, Jian
    Luo, Zhiping
    Li, Bo
    Zhao, Xishun
    NEURAL COMPUTING & APPLICATIONS, 2020, 32 (11): 6467-6480
  • [3] A Multi-Task Representation Learning Architecture for Enhanced Graph Classification
    Xie, Yu
    Gong, Maoguo
    Gao, Yuan
    Qin, A. K.
    Fan, Xiaolong
    FRONTIERS IN NEUROSCIENCE, 2020, 13
  • [4] Text Emotion Distribution Learning via Multi-Task Convolutional Neural Network
    Zhang, Yuxiang
    Fu, Jiamei
    She, Dongyu
    Zhang, Ying
    Wang, Senzhang
    Yang, Jufeng
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018: 4595-4601
  • [5] Active Multi-Task Representation Learning
    Chen, Yifang
    Du, Simon S.
    Jamieson, Kevin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [6] Multi-Task Network Representation Learning
    Xie, Yu
    Jin, Peixuan
    Gong, Maoguo
    Zhang, Chen
    Yu, Bin
    FRONTIERS IN NEUROSCIENCE, 2020, 14
  • [7] Speech Emotion Recognition with Multi-task Learning
    Cai, Xingyu
    Yuan, Jiahong
    Zheng, Renjie
    Huang, Liang
    Church, Kenneth
    INTERSPEECH 2021, 2021: 4508-4512
  • [8] Multi-task Learning for Speech Emotion and Emotion Intensity Recognition
    Yue, Pengcheng
    Qu, Leyuan
    Zheng, Shukai
    Li, Taihao
    PROCEEDINGS OF 2022 ASIA-PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE (APSIPA ASC), 2022: 1232-1237
  • [9] Adversarial Multi-task Learning for Text Classification
    Liu, Pengfei
    Qiu, Xipeng
    Huang, Xuanjing
    PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 1, 2017: 1-10