Hybrid Contrastive Learning of Tri-Modal Representation for Multimodal Sentiment Analysis

Cited by: 65
Authors
Mai, Sijie [1 ]
Zeng, Ying [1 ]
Zheng, Shuangjia [1 ]
Hu, Haifeng [2 ]
Affiliations
[1] Sun Yat Sen Univ, Guangzhou 510275, Peoples R China
[2] Sun Yat Sen Univ, Sch Elect & Informat Technol, Guangzhou 510275, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Multimodal sentiment analysis; supervised contrastive learning; representation learning; multimodal learning; FUSION; LANGUAGE;
DOI
10.1109/TAFFC.2022.3172360
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
The wide application of smart devices makes large amounts of multimodal data available for many tasks. In multimodal sentiment analysis, most previous works focus on exploring intra- and inter-modal interactions. However, training a network with cross-modal information (language, audio, and visual) remains challenging due to the modality gap. Moreover, while the learning of dynamics within each sample has drawn great attention, the learning of inter-sample and inter-class relationships is neglected, and the limited size of existing datasets restricts the generalization ability of models. To address these issues, we propose HyCon, a novel framework for hybrid contrastive learning of tri-modal representations. Specifically, we simultaneously perform intra-/inter-modal contrastive learning and semi-contrastive learning, with which the model can fully explore cross-modal interactions, learn inter-sample and inter-class relationships, and reduce the modality gap. Besides, a refinement term and a modality margin are introduced to enable better learning of unimodal pairs. Moreover, we devise a pair selection mechanism to identify the informative negative and positive pairs and assign them weights. HyCon naturally generates a large number of training pairs, which improves generalization and reduces the negative effect of limited dataset size. Extensive experiments demonstrate that our method outperforms baselines on multimodal sentiment analysis and emotion recognition.
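Since the abstract compresses several technical ideas, a minimal sketch may help make the inter-modal supervised contrastive term concrete. The PyTorch snippet below is an illustration, not the authors' released implementation: it contrasts unimodal embeddings from two modalities under class supervision and subtracts a margin from positive-pair logits as one plausible reading of the paper's "modality margin". The function name, the `temperature` and `margin` values, and the margin placement are all assumptions not taken from the source.

```python
# Minimal sketch (assumption: not the authors' code) of a supervised
# inter-modal contrastive loss with a modality margin, in PyTorch.
import torch
import torch.nn.functional as F

def inter_modal_supcon(z_a, z_b, labels, temperature=0.1, margin=0.2):
    """Contrast embeddings of modality A against modality B.

    Same-class cross-modal pairs are treated as positives, all other
    pairs as negatives. The `margin` term penalizes positive logits --
    one plausible reading of HyCon's modality margin; the paper's
    exact formulation may differ.
    """
    z_a = F.normalize(z_a, dim=-1)           # unit-norm modality-A embeddings
    z_b = F.normalize(z_b, dim=-1)           # unit-norm modality-B embeddings
    logits = z_a @ z_b.t() / temperature     # (N, N) scaled cosine similarities
    pos_mask = labels.unsqueeze(1).eq(labels.unsqueeze(0)).float()
    # Subtract the margin from positive logits so positives must beat
    # the negatives by a gap before the loss saturates.
    logits = logits - pos_mask * (margin / temperature)
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average the log-likelihood over each anchor's positive set.
    loss = -(pos_mask * log_prob).sum(1) / pos_mask.sum(1).clamp(min=1.0)
    return loss.mean()

# Usage with random stand-in features (hypothetical dimensions):
if __name__ == "__main__":
    z_text, z_audio = torch.randn(8, 64), torch.randn(8, 64)
    labels = torch.randint(0, 3, (8,))       # sentiment class labels
    print(inter_modal_supcon(z_text, z_audio, labels).item())
```

In the full framework such a term would presumably be computed for each modality pair (language-audio, language-visual, audio-visual) alongside the intra-modal and semi-contrastive terms, with the pair selection mechanism reweighting informative pairs; those details are not reproduced in this sketch.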
Pages: 2276-2289
Page count: 14