Learning from the global view: Supervised contrastive learning of multimodal representation

Cited by: 9
Authors
Mai, Sijie [1 ]
Zeng, Ying [1 ]
Hu, Haifeng [1 ]
Affiliations
[1] Sun Yat-sen University, School of Electronics and Information Technology, Guangzhou 510006, Guangdong, People's Republic of China
Keywords
Multimodal sentiment analysis; Multimodal representation learning; Contrastive learning; Multimodal humor detection; Fusion
DOI
10.1016/j.inffus.2023.101920
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Advances in technology have made abundant multimodal data available, which can be utilized in many representation learning tasks. However, most methods ignore the rich modality correlation information stored in each multimodal object and fail to fully exploit the potential of multimodal data. To address this issue, cross-modal contrastive learning methods have been proposed to learn the similarity score of each modality pair in a self-/weakly-supervised manner and improve model robustness. Though effective, contrastive learning based on unimodal representations can be inaccurate, as unimodal representations fail to reveal the global information of multimodal objects. To this end, we propose a contrastive learning pipeline based on multimodal representations to learn from the global view, and devise multiple techniques to generate negative and positive samples for each anchor. To generate positive samples, we apply the mix-up operation to mix the multimodal representations of two different objects that have the maximal label similarity. Moreover, we devise a permutation-invariant fusion mechanism that defines positive samples by permuting the input order of modalities for fusion and by sampling various contrastive fusion networks. In this way, we force the multimodal representation to be invariant to the order of modalities and the structure of the fusion network, so that the model can capture high-level semantic information of multimodal objects. To define negative samples, for each modality, we randomly replace the unimodal representation with that of another, dissimilar object when synthesizing the multimodal representation. This leads the model to capture the high-level concurrence information and correspondence relationship between modalities within each object. We also directly define the multimodal representation of another object as a negative sample, where the chosen object shares the minimal label similarity with the anchor. The label information is leveraged in the proposed framework to learn a more discriminative multimodal embedding space for downstream tasks. Extensive experiments demonstrate that our method outperforms previous state-of-the-art baselines on multimodal sentiment analysis and humor detection.
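The abstract only sketches the method in prose. As a rough, hypothetical illustration (not the authors' released implementation), the PyTorch snippet below fuses three unimodal features into an anchor representation, builds one positive by re-fusing the same modalities in a different input order, a second positive by mixing two fused representations, and a negative by swapping in a unimodal feature from another object, then scores them with a generic InfoNCE-style loss. The sum-based fusion, the dropout used as a stand-in for sampling different fusion networks, the random pairing used in place of label-similarity pairing, and all names are assumptions made for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PermutationInvariantFusion(nn.Module):
    """Fuses unimodal features by summing a shared projection of each modality,
    so the result does not depend on the order in which modalities are passed.
    Dropout is a stand-in for sampling different contrastive fusion networks:
    two fusions of the same object are similar but not identical."""

    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.drop = nn.Dropout(p=0.1)

    def forward(self, unimodal_feats):
        # unimodal_feats: list of (batch, dim) tensors, one per modality
        return torch.stack(
            [self.drop(self.proj(f)) for f in unimodal_feats], dim=0
        ).sum(dim=0)


def info_nce(anchor, positives, negatives, temperature=0.1):
    """Generic InfoNCE-style objective: pull the anchor toward its positives
    and away from its negatives in the fused (multimodal) embedding space."""
    anchor = F.normalize(anchor, dim=-1)        # (B, D)
    positives = F.normalize(positives, dim=-1)  # (B, P, D)
    negatives = F.normalize(negatives, dim=-1)  # (B, N, D)
    pos_logits = (anchor.unsqueeze(1) * positives).sum(-1) / temperature  # (B, P)
    neg_logits = (anchor.unsqueeze(1) * negatives).sum(-1) / temperature  # (B, N)
    logits = torch.cat([pos_logits, neg_logits], dim=1)
    log_prob = F.log_softmax(logits, dim=1)[:, : pos_logits.size(1)]
    return -log_prob.mean()


# Toy batch: random features stand in for text/audio/vision encoder outputs.
B, D = 8, 64
text, audio, vision = (torch.randn(B, D) for _ in range(3))
fusion = PermutationInvariantFusion(D)

anchor = fusion([text, audio, vision])
# Positive 1: same object, modalities fused in a different input order.
pos_perm = fusion([vision, text, audio])
# Positive 2: mix-up of two fused representations. The paper pairs each anchor
# with the most label-similar object; a random pairing is used here as a stand-in.
idx = torch.randperm(B)
lam = 0.7
pos_mix = lam * anchor + (1.0 - lam) * anchor[idx]
# Negative: re-fuse after replacing one unimodal representation with that of
# another (ideally dissimilar) object, breaking cross-modal correspondence.
neg = fusion([text, audio[torch.randperm(B)], vision])

loss = info_nce(
    anchor,
    torch.stack([pos_perm, pos_mix], dim=1),  # (B, 2, D)
    neg.unsqueeze(1),                         # (B, 1, D)
)
print(f"contrastive loss: {loss.item():.4f}")
```

In the paper the label information further weights which pairs count as positives and negatives; this toy example omits that supervision and only shows the sampling and loss structure.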
Pages: 14