Learning Modality Consistency and Difference Information with Multitask Learning for Multimodal Sentiment Analysis

Cited: 1
Authors
Fang, Cheng [1 ,2 ]
Liang, Feifei [3 ]
Li, Tianchi [2 ]
Guan, Fangheng [2 ]
Affiliations
[1] Civil Aviat Univ China, Key Lab Civil Aviat Thermal Hazards Prevent & Emer, Tianjin 300300, Peoples R China
[2] Civil Aviat Univ China, Coll Elect Informat & Automat, Tianjin 300300, Peoples R China
[3] China FAW Nanjing Technol Dev Co Ltd, Nanjing 211100, Peoples R China
Keywords
multimodal sentiment analysis; attention mechanism; multitask learning; adversarial training;
DOI
10.3390/fi16060213
CLC Number
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
The primary challenge in multimodal sentiment analysis (MSA) lies in developing robust joint representations that effectively learn mutual information from diverse modalities. Previous research in this field tends to rely on feature concatenation to obtain joint representations, but such approaches fail to fully exploit the interactive patterns that ensure consistency and differentiation across modalities. To address this limitation, we propose a novel framework for multimodal sentiment analysis named CDML (Consistency and Difference using a Multitask Learning network). Specifically, CDML uses an attention mechanism to efficiently assign attention weights to each modality, adversarial training to extract information that is consistent across modalities, and a multitask learning framework to capture the differences among modalities. Experiments on two benchmark MSA datasets, CMU-MOSI and CMU-MOSEI, show that the proposed method outperforms seven existing approaches by at least 1.3% on Acc-2 and 1.7% on F1.
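Since the paper's implementation is not reproduced in this record, the following is a minimal, hypothetical PyTorch sketch of the three mechanisms the abstract names: attention-based modality weighting, a gradient-reversal adversarial branch that encourages modality-consistent shared features, and per-modality multitask heads that preserve modality-specific (difference) information. All module names, dimensions, and design details here are assumptions, not the authors' CDML code.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity on the forward pass, negated gradient on
    the backward pass -- the standard trick for adversarial feature learning."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        # Flipping the gradient turns the discriminator's objective into an
        # adversarial signal for the shared encoder.
        return -grad_output


class CDMLSketch(nn.Module):
    """Hypothetical sketch of CDML's three components, not the released model."""

    def __init__(self, dim=128, n_modalities=3):
        super().__init__()
        # Shared projection into a common space (stand-in for the paper's
        # modality encoders, whose details are not given in the abstract).
        self.shared = nn.Linear(dim, dim)
        # Scalar attention score per modality.
        self.attn = nn.Linear(dim, 1)
        # Discriminator guesses which modality a shared feature came from.
        self.disc = nn.Linear(dim, n_modalities)
        # Multitask heads: one sentiment regressor per modality keeps
        # modality-specific (difference) information alive.
        self.uni_heads = nn.ModuleList(
            [nn.Linear(dim, 1) for _ in range(n_modalities)]
        )
        self.fused_head = nn.Linear(dim, 1)

    def forward(self, feats):
        # feats: list of (batch, dim) tensors, e.g. [text, audio, vision].
        shared = [torch.tanh(self.shared(f)) for f in feats]
        # Softmax over modalities yields the per-modality attention weights.
        scores = torch.cat([self.attn(h) for h in shared], dim=-1)
        weights = torch.softmax(scores, dim=-1)
        fused = sum(weights[:, i:i + 1] * h for i, h in enumerate(shared))
        # Adversarial branch: reversed gradients push shared features toward
        # modality consistency (indistinguishable to the discriminator).
        modality_logits = [self.disc(GradReverse.apply(h)) for h in shared]
        # Multitask branch: unimodal predictions trained alongside the fused one.
        uni_preds = [head(h) for head, h in zip(self.uni_heads, shared)]
        return self.fused_head(fused), uni_preds, modality_logits
```

Under these assumptions, a training loop would combine a sentiment loss on the fused prediction, per-modality sentiment losses from the multitask heads, and a cross-entropy loss on the discriminator logits with the modality index as the label; because of the gradient reversal, minimizing that last loss drives the shared encoder toward features the discriminator cannot tell apart.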
Pages: 17