Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities

Cited by: 25
Authors
Zeng, Jiandian [1 ]
Liu, Tianyi [2 ]
Zhou, Jiantao [1 ]
Affiliations
[1] Univ Macau, State Key Lab Internet of Things for Smart City, Macau, Peoples R China
[2] Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Source
PROCEEDINGS OF THE 45TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL (SIGIR '22) | 2022
Keywords
Multimodal Sentiment Analysis; Missing Modality; Joint Representation;
DOI
10.1145/3477495.3532064
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Multimodal sentiment analysis has typically been studied under the assumption that all modalities are available. However, such a strong assumption does not always hold in practice, and most multimodal fusion models may fail when some modalities are missing. Several works have addressed the missing modality problem, but most of them only consider the case of a single missing modality and ignore the practically more general case of multiple missing modalities. To this end, in this paper, we propose a Tag-Assisted Transformer Encoder (TATE) network to handle the problem of uncertain missing modalities. Specifically, we design a tag encoding module that covers both the single-modality and multiple-modality missing cases, so as to guide the network's attention toward the missing modalities. In addition, we adopt a new space projection pattern to align the common vectors. A Transformer encoder-decoder network is then utilized to learn the missing modality features. Finally, the outputs of the Transformer encoder are used for sentiment classification. Extensive experiments on the CMU-MOSI and IEMOCAP datasets show that our method achieves significant improvements over several baselines.
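The tag-encoding idea from the abstract can be illustrated with a minimal sketch: a binary presence tag marks which modalities are available, covering both single- and multiple-modality-missing cases, while missing inputs are zero-filled placeholders. All names below (`MODALITIES`, `encode_tag`) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of tag encoding for uncertain missing modalities.
# The tag would be prepended to the fused input so the Transformer
# encoder can attend to which modalities need reconstruction.

MODALITIES = ["text", "audio", "vision"]

def encode_tag(features):
    """Build a presence tag and zero-fill missing modalities.

    `features` maps modality name -> list[float], or None if missing.
    Returns (tag, filled): tag[i] is 1.0 iff MODALITIES[i] is present.
    """
    # Use any available modality to infer the feature dimension.
    dim = next(len(v) for v in features.values() if v is not None)
    tag = [1.0 if features.get(m) is not None else 0.0 for m in MODALITIES]
    filled = {
        m: features[m] if features.get(m) is not None else [0.0] * dim
        for m in MODALITIES
    }
    return tag, filled

# Example: only the text modality is present (two modalities missing).
tag, filled = encode_tag({"text": [0.2, 0.5], "audio": None, "vision": None})
print(tag)  # [1.0, 0.0, 0.0]
```

A single tag vector handles any missing pattern, which is what lets the same encoding cover both the single- and multiple-missing cases described in the abstract.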
Pages: 1545 - 1554
Page count: 10
Related Papers
17 records in total
  • [1] Robust Multimodal Sentiment Analysis via Tag Encoding of Uncertain Missing Modalities
    Zeng, Jiandian
    Zhou, Jiantao
    Liu, Tianyi
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 6301 - 6314
  • [2] Similar modality completion-based multimodal sentiment analysis under uncertain missing modalities
    Sun, Yuhang
    Liu, Zhizhong
    Sheng, Quan Z.
    Chu, Dianhui
    Yu, Jian
    Sun, Hongxiang
    INFORMATION FUSION, 2024, 110
  • [3] Towards Robust Multimodal Sentiment Analysis Under Uncertain Signal Missing
    Li, Mingcheng
    Yang, Dingkang
    Zhang, Lihua
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 1497 - 1501
  • [4] Robust Multimodal Representation under Uncertain Missing Modalities
    Lan, Guilin
    Du, Yeqian
    Yang, Zhouwang
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2025, 21 (01)
  • [5] Modality translation-based multimodal sentiment analysis under uncertain modalities
    Liu, Zhizhong
    Zhou, Bin
    Chu, Dianhui
    Sun, Yuhang
    Meng, Lingqiang
    INFORMATION FUSION, 2024, 101
  • [6] UniMF: A Unified Multimodal Framework for Multimodal Sentiment Analysis in Missing Modalities and Unaligned Multimodal Sequences
    Huan, Ruohong
    Zhong, Guowei
    Chen, Peng
    Liang, Ronghua
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 5753 - 5768
  • [7] SSLMM: Semi-Supervised Learning with Missing Modalities for Multimodal Sentiment Analysis
    Wang, Yiyu
    Jian, Haifang
    Zhuang, Jian
    Guo, Huimin
    Leng, Yan
    INFORMATION FUSION, 2025, 120
  • [8] Multimodal sentiment analysis based on multi-stage graph fusion networks under random missing modality conditions
    Zhang, Ting
    Song, Bin
    Zhang, Zhiyong
    Zhang, Yajuan
    IET IMAGE PROCESSING, 2025, 19 (01)
  • [9] Prompt-matching synthesis model for missing modalities in sentiment analysis
    Liu, Jiaqi
    Wang, Yong
    Yang, Jing
    Shang, Fanshu
    He, Fan
    KNOWLEDGE-BASED SYSTEMS, 2025, 318
  • [10] AOBERT: All-modalities-in-One BERT for multimodal sentiment analysis
    Kim, Kyeonghun
    Park, Sanghyun
    INFORMATION FUSION, 2023, 92 : 37 - 45