Multi-task Representation Learning for Enhanced Emotion Categorization in Short Text

Cited: 10
Authors
Sen, Anirban [1 ]
Sinha, Manjira [2 ]
Mannarswamy, Sandya [2 ]
Roy, Shourya [3 ]
Affiliations
[1] IIT Delhi, Comp Sci & Engn Dept, New Delhi, India
[2] Conduent Labs India, Bangalore, Karnataka, India
[3] Amer Express, Big Data Labs, New York, NY USA
Keywords
Multi-tasking; Emotion prediction; Representation learning; Joint learning
DOI
10.1007/978-3-319-57529-2_26
CLC classification
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Embedding-based dense contextual representations of data have proven effective in various NLP tasks, as they alleviate the burden of heavy feature engineering. However, generalized representation learning approaches do not capture task-specific subtleties. In addition, the computational model for each task is often developed in isolation, overlooking the interrelations among certain NLP tasks. Given that representation learning typically requires a substantial amount of labeled, annotated data, which is scarce, it is essential to explore learning embeddings jointly under the supervision of multiple related tasks while also incorporating task-specific attributes. Inspired by the basic premise of multi-task learning, namely that correlation between related tasks can be exploited to improve classification, we propose a novel technique for building jointly learnt, task-specific embeddings for emotion and sentiment prediction. Here, a sentiment prediction task acts as an auxiliary signal to enhance the primary emotion prediction task. Our experimental results demonstrate that embeddings learnt under the supervised signals of two related tasks outperform embeddings learnt in a single-task setup on the downstream task of emotion prediction.
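The auxiliary-task setup the abstract describes can be illustrated with a minimal sketch: a shared embedding table feeds two task-specific softmax heads, and the training loss combines the primary (emotion) and auxiliary (sentiment) cross-entropies. This is not the authors' architecture; the dimensions, class counts, mean-pooling, and the weighting factor `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions, not taken from the paper.
vocab_size, embed_dim = 100, 16
n_emotions, n_sentiments = 6, 2  # e.g. 6 emotion classes, binary sentiment

# Shared embedding table, learnt jointly under both supervised signals.
E = rng.normal(scale=0.1, size=(vocab_size, embed_dim))

# Task-specific heads: primary (emotion) and auxiliary (sentiment).
W_emo = rng.normal(scale=0.1, size=(embed_dim, n_emotions))
W_sen = rng.normal(scale=0.1, size=(embed_dim, n_sentiments))

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def joint_loss(token_ids, emotion_label, sentiment_label, lam=0.5):
    """Sum of cross-entropies on both tasks; lam weights the auxiliary task."""
    h = E[token_ids].mean(axis=0)   # shared representation of the short text
    p_emo = softmax(h @ W_emo)      # primary-task class distribution
    p_sen = softmax(h @ W_sen)      # auxiliary-task class distribution
    l_emo = -np.log(p_emo[emotion_label])
    l_sen = -np.log(p_sen[sentiment_label])
    return l_emo + lam * l_sen      # gradients of both terms flow into E

loss = joint_loss(np.array([3, 17, 42]), emotion_label=2, sentiment_label=1)
print(loss)
```

Because both loss terms backpropagate through the shared table `E`, the embeddings are shaped by the supervised signals of both tasks, which is the intuition behind the reported gain over single-task training.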
Pages: 324-336 (13 pages)
Related papers (50 total)
  • [31] Emotion recognition in conversations with emotion shift detection based on multi-task learning
    Gao, Qingqing
    Cao, Biwei
    Guan, Xin
    Gu, Tianyun
    Bao, Xing
    Wu, Junyan
    Liu, Bo
    Cao, Jiuxin
    KNOWLEDGE-BASED SYSTEMS, 2022, 248
  • [32] Unifying Defect Prediction, Categorization, and Repair by Multi-task Deep Learning
    Ni, Chao
    Yang, Kaiwen
    Zhu, Yan
    Chen, Xiang
    Yang, Xiaohu
    2023 38TH IEEE/ACM INTERNATIONAL CONFERENCE ON AUTOMATED SOFTWARE ENGINEERING, ASE, 2023, : 1980 - 1992
  • [33] Multi-Task Active Learning for Simultaneous Emotion Classification and Regression
    Jiang, Xue
    Meng, Lubin
    Wu, Dongrui
    2021 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2021, : 1947 - 1952
  • [34] Multi-task gradient descent for multi-task learning
    Lu Bai
    Yew-Soon Ong
    Tiantian He
    Abhishek Gupta
    Memetic Computing, 2020, 12 : 355 - 369
  • [35] Multi-task gradient descent for multi-task learning
    Bai, Lu
    Ong, Yew-Soon
    He, Tiantian
    Gupta, Abhishek
    MEMETIC COMPUTING, 2020, 12 (04) : 355 - 369
  • [36] Robust Lifelong Multi-task Multi-view Representation Learning
    Sun, Gan
    Cong, Yang
    Li, Jun
    Fu, Yun
    2018 9TH IEEE INTERNATIONAL CONFERENCE ON BIG KNOWLEDGE (ICBK), 2018, : 91 - 98
  • [37] Ask the GRU: Multi-task Learning for Deep Text Recommendations
    Bansal, Trapit
    Belanger, David
    McCallum, Andrew
    PROCEEDINGS OF THE 10TH ACM CONFERENCE ON RECOMMENDER SYSTEMS (RECSYS'16), 2016, : 107 - 114
  • [38] Multi-task Learning with Bidirectional Language Models for Text Classification
    Yang, Qi
    Shang, Lin
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [39] Multi-task learning for historical text normalization: Size matters
    Bollmann, Marcel
    Sogaard, Anders
    Bingel, Joachim
    DEEP LEARNING APPROACHES FOR LOW-RESOURCE NATURAL LANGUAGE PROCESSING (DEEPLO), 2018, : 19 - 24
  • [40] Power text information extraction based on multi-task learning
    Ji, Xin
    Wu, Tongxin
    Yu, Ting
    Dong, Linxiao
    Chen, Yiting
    Mi, Na
    Zhao, Jiakui
    Beijing Hangkong Hangtian Daxue Xuebao/Journal of Beijing University of Aeronautics and Astronautics, 2024, 50 (08): : 2461 - 2469