SSLMM: Semi-Supervised Learning with Missing Modalities for Multimodal Sentiment Analysis

Cited: 0
Authors
Wang, Yiyu [1]
Jian, Haifang [2,3]
Zhuang, Jian [4]
Guo, Huimin [2,3]
Leng, Yan [1]
Affiliations
[1] Shandong Normal Univ, Sch Phys & Elect, Jinan 250358, Peoples R China
[2] Chinese Acad Sci, Lab Solid State Optoelect Informat Technol, Inst Semicond, Beijing 100083, Peoples R China
[3] Chinese Acad Sci, Beijing 100049, Peoples R China
[4] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116023, Liaoning, Peoples R China
Keywords
Multimodal sentiment analysis; Semi-supervised learning; Missing modalities
DOI
10.1016/j.inffus.2025.103058
Chinese Library Classification (CLC)
TP18 (theory of artificial intelligence)
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Multimodal Sentiment Analysis (MSA) integrates information from text, audio, and visual signals to understand human emotions, but real-world applications face two challenges: (1) high annotation costs limit the effectiveness of fully supervised methods, and (2) missing modalities severely impact model robustness. Existing studies address these issues separately, but few tackle both within a single framework, even though the two challenges often occur together in practice and therefore call for an algorithm that handles both. To this end, we propose a Semi-Supervised Learning with Missing Modalities (SSLMM) framework. SSLMM combines self-supervised learning, alternating information interaction, semi-supervised learning, and modality reconstruction to address label scarcity and missing modalities simultaneously. SSLMM first captures latent structural information through self-supervised pre-training, then fine-tunes the model with semi-supervised learning and modality reconstruction to reduce dependence on labeled data and improve robustness to missing modalities. The framework uses a graph-based architecture with an iterative message propagation mechanism that alternately propagates intra-modal and inter-modal messages, capturing emotional associations within and across modalities. Experiments on CMU-MOSI, CMU-MOSEI, and CH-SIMS show that, when the proportion of labeled samples and the missing-modality rate are both 0.5, SSLMM achieves binary (negative vs. positive) classification accuracies of 80.2%, 81.7%, and 77.1%, respectively, surpassing existing methods.
Pages: 19
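
The abstract describes a graph-based architecture that alternates intra-modal and inter-modal message propagation and uses a modality-reconstruction objective. The sketch below is a minimal PyTorch illustration of that general idea, not the authors' implementation: the module names (AlternatingMessagePassing, ReconstructionHead), the use of attention layers for propagation, mean pooling, the feature dimensions, and the MSE reconstruction loss are all assumptions made for illustration.

```python
# Minimal sketch (not the SSLMM code): alternating intra-/inter-modal message passing
# over per-modality features, plus a simple reconstruction head for a missing modality.
# All layer choices and dimensions below are illustrative assumptions.
import torch
import torch.nn as nn

class AlternatingMessagePassing(nn.Module):
    """Alternates intra-modal and inter-modal message propagation (illustrative)."""
    def __init__(self, dim=128, heads=4, steps=2):
        super().__init__()
        self.steps = steps
        # Intra-modal step: self-attention within one modality's node sequence.
        self.intra = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Inter-modal step: cross-attention from one modality to the other modalities.
        self.inter = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_intra = nn.LayerNorm(dim)
        self.norm_inter = nn.LayerNorm(dim)

    def forward(self, feats):
        # feats: dict mapping modality name -> (batch, seq_len, dim) node features
        for _ in range(self.steps):
            # 1) propagate messages within each modality
            feats = {m: self.norm_intra(x + self.intra(x, x, x)[0])
                     for m, x in feats.items()}
            # 2) propagate messages across modalities
            updated = {}
            for m, x in feats.items():
                others = torch.cat([v for k, v in feats.items() if k != m], dim=1)
                updated[m] = self.norm_inter(x + self.inter(x, others, others)[0])
            feats = updated
        return feats

class ReconstructionHead(nn.Module):
    """Predicts a pooled representation of a missing modality from the available ones."""
    def __init__(self, dim=128, n_available=2):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(n_available * dim, dim),
                                  nn.ReLU(),
                                  nn.Linear(dim, dim))

    def forward(self, available):  # list of (batch, dim) pooled features
        return self.proj(torch.cat(available, dim=-1))

# Toy usage: three modalities, with the visual stream treated as missing.
dim = 128
feats = {m: torch.randn(8, 20, dim) for m in ("text", "audio", "visual")}
fused = AlternatingMessagePassing(dim)(feats)
pooled = {m: x.mean(dim=1) for m, x in fused.items()}
recon = ReconstructionHead(dim)([pooled["text"], pooled["audio"]])
recon_loss = nn.functional.mse_loss(recon, pooled["visual"])  # modality-reconstruction term
```

A semi-supervised pipeline in the spirit of the abstract would combine such a reconstruction term (computable on unlabeled samples) with supervised losses on the labeled subset; the exact objectives, graph construction, and pre-training tasks used by SSLMM are not detailed in this record.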