SSLMM: Semi-Supervised Learning with Missing Modalities for Multimodal Sentiment Analysis

Cited: 0
Authors
Wang, Yiyu [1 ]
Jian, Haifang [2 ,3 ]
Zhuang, Jian [4 ]
Guo, Huimin [2 ,3 ]
Leng, Yan [1 ]
Affiliations
[1] Shandong Normal Univ, Sch Phys & Elect, Jinan 250358, Peoples R China
[2] Chinese Acad Sci, Lab Solid State Optoelect Informat Technol, Inst Semicond, Beijing 100083, Peoples R China
[3] Chinese Acad Sci, Beijing 100049, Peoples R China
[4] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116023, Liaoning, Peoples R China
Keywords
Multimodal sentiment analysis; Semi-supervised learning; Missing modalities;
DOI
10.1016/j.inffus.2025.103058
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multimodal Sentiment Analysis (MSA) integrates information from text, audio, and visuals to understand human emotions, but real-world applications face two challenges: (1) expensive annotation costs reduce the effectiveness of fully supervised methods, and (2) missing modalities severely impact model robustness. While there are studies addressing these issues separately, few focus on solving both within a single framework. In real-world scenarios, these challenges often occur together, necessitating an algorithm that can handle both. To address this, we propose a Semi-Supervised Learning with Missing Modalities (SSLMM) framework. SSLMM combines self-supervised learning, alternating information interaction, semi-supervised learning, and modality reconstruction to tackle label scarcity and missing modalities simultaneously. First, SSLMM captures latent structural information through self-supervised pre-training. It then fine-tunes the model using semi-supervised learning and modality reconstruction to reduce dependence on labeled data and improve robustness to missing modalities. The framework uses a graph-based architecture with an iterative message propagation mechanism to alternately propagate intra-modal and inter-modal messages, capturing emotional associations within and across modalities. Experiments on CMU-MOSI, CMU-MOSEI, and CH-SIMS demonstrate that, when the proportion of labeled samples and the missing-modality rate are both 0.5, SSLMM achieves binary classification (negative vs. positive) accuracies of 80.2%, 81.7%, and 77.1%, respectively, surpassing existing methods.
Pages: 19
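The abstract describes a graph-based architecture that alternates intra-modal and inter-modal message propagation, together with modality reconstruction for missing inputs. The following is a minimal, hypothetical PyTorch sketch of that alternating scheme only; the module name AlternatingMessagePassing, the GRU and attention choices, the dimensions, and the reconstruction head are illustrative assumptions and do not reflect the authors' implementation.

# Hypothetical sketch of the alternating message-propagation idea from the abstract:
# each modality (text, audio, visual) contributes node features, and messages are
# passed within a modality (intra-modal) and across modalities (inter-modal) in
# alternating rounds. All design choices below are illustrative assumptions.
import torch
import torch.nn as nn


class AlternatingMessagePassing(nn.Module):
    def __init__(self, dim: int = 128, rounds: int = 2):
        super().__init__()
        self.rounds = rounds
        # Intra-modal message function: mixes time steps within one modality.
        self.intra = nn.GRU(dim, dim, batch_first=True)
        # Inter-modal message function: attends across the modality summaries.
        self.inter = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Simple reconstruction head used when a modality is missing (illustrative).
        self.reconstruct = nn.Linear(dim, dim)

    def forward(self, feats: dict) -> torch.Tensor:
        # feats maps modality name -> (batch, seq_len, dim); a missing modality
        # can be passed as a zero tensor and re-estimated from the others.
        for _ in range(self.rounds):
            # 1) Intra-modal propagation: refine each modality independently.
            feats = {m: self.intra(x)[0] for m, x in feats.items()}
            # 2) Inter-modal propagation: pool each modality, attend across modalities.
            pooled = torch.stack([x.mean(dim=1) for x in feats.values()], dim=1)
            fused, _ = self.inter(pooled, pooled, pooled)  # (batch, n_modalities, dim)
            # 3) Broadcast the fused context back and (crudely) reconstruct each
            #    modality from the shared representation.
            feats = {
                m: x + self.reconstruct(fused[:, i]).unsqueeze(1)
                for i, (m, x) in enumerate(feats.items())
            }
        # Final utterance-level representation for sentiment prediction.
        return torch.cat([x.mean(dim=1) for x in feats.values()], dim=-1)


if __name__ == "__main__":
    model = AlternatingMessagePassing(dim=128, rounds=2)
    batch = {
        "text":   torch.randn(4, 20, 128),
        "audio":  torch.randn(4, 50, 128),
        "visual": torch.zeros(4, 30, 128),  # simulate a missing modality
    }
    print(model(batch).shape)  # torch.Size([4, 384])

In this sketch a missing modality is fed as a zero tensor and re-estimated from the fused cross-modal context, which mirrors the reconstruction idea only at a schematic level; the paper's pre-training, semi-supervised fine-tuning, and loss design are not shown.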