Multimodal sentiment analysis with unimodal label generation and modality decomposition

Cited by: 2
Authors
Zhu, Linan [1 ]
Zhao, Hongyan [1 ]
Zhu, Zhechao [1 ]
Zhang, Chenwei [2 ]
Kong, Xiangjie [1 ]
Affiliations
[1] Zhejiang Univ Technol, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[2] Univ Hong Kong, Fac Educ, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multimodal sentiment analysis; Unimodal label generation; Modality decomposition; FUSION;
DOI
10.1016/j.inffus.2024.102787
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multimodal sentiment analysis aims to combine information from different modalities to improve emotion understanding and achieve accurate prediction. However, existing methods suffer from information redundancy and modality heterogeneity during fusion, and common multimodal sentiment analysis datasets lack unimodal labels. To address these issues, this paper proposes a multimodal sentiment analysis approach based on unimodal label generation and modality decomposition (ULMD). The method adopts a multi-task learning framework that divides multimodal sentiment analysis into one multimodal task and three unimodal tasks. In addition, a modality representation separator is introduced to decompose each modality representation into a modality-invariant representation and a modality-specific representation. This design both explores fusion between modalities and generates unimodal labels to enhance the performance of the multimodal sentiment analysis task. Extensive experiments on two public benchmark datasets demonstrate the effectiveness of the method.
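The decomposition idea described in the abstract (splitting each modality representation into a modality-invariant part produced by a shared projection and a modality-specific part produced by a private projection, then fusing both) can be sketched as follows. This is a minimal NumPy illustration of the general technique only, not the paper's actual architecture: the dimensions, random weight matrices, and concatenation-based fusion are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 8, 4  # input and projected dimensions (illustrative)

# Toy per-modality representations (text, audio, visual).
H = {m: rng.normal(size=d) for m in ("text", "audio", "visual")}

# One shared ("modality-invariant") projection, applied to every modality,
# and one private ("modality-specific") projection per modality.
W_shared = rng.normal(size=(k, d))
W_private = {m: rng.normal(size=(k, d)) for m in H}

# Decompose each modality representation into its two parts.
invariant = {m: W_shared @ h for m, h in H.items()}
specific = {m: W_private[m] @ h for m, h in H.items()}

# Fuse by concatenating all decomposed parts into one multimodal vector.
fused = np.concatenate([invariant[m] for m in H] + [specific[m] for m in H])
```

In a trained model, the invariant parts would be pulled toward a common space (e.g. with a similarity loss) while the specific parts are kept apart, and the fused vector would feed the multimodal prediction head alongside three unimodal heads.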
Pages: 10
Related Papers
50 records in total
[21] Xiang, Yan; Cai, Yunjia; Guo, Junjun. MSFNet: modality smoothing fusion network for multimodal aspect-based sentiment analysis. FRONTIERS IN PHYSICS, 2023, 11.
[22] Li, Yuqiang; Weng, Wenxuan; Liu, Chun; Li, Lin. CSMF-SPC: Multimodal Sentiment Analysis Model with Effective Context Semantic Modality Fusion and Sentiment Polarity Correction. PATTERN ANALYSIS AND APPLICATIONS, 2024, 27 (03).
[23] Zhang, Junling; Wu, Xuemei; Huang, Changqin. AdaMoW: Multimodal Sentiment Analysis Based on Adaptive Modality-Specific Weight Fusion Network. IEEE ACCESS, 2023, 11: 48410-48420.
[24] Yin, Z.; Du, Y.; Liu, Y.; Wang, Y. Multi-layer cross-modality attention fusion network for multimodal sentiment analysis. MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (21): 60171-60187.
[25] Sun, Yuhang; Liu, Zhizhong; Sheng, Quan Z.; Chu, Dianhui; Yu, Jian; Sun, Hongxiang. Similar modality completion-based multimodal sentiment analysis under uncertain missing modalities. INFORMATION FUSION, 2024, 110.
[26] Zeng, Ying; Yan, Wenjun; Mai, Sijie; Hu, Haifeng. Disentanglement Translation Network for multimodal sentiment analysis. INFORMATION FUSION, 2024, 102.
[27] Zhang, Y.; Rong, L.; Song, D.; Zhang, P. A Survey on Multimodal Sentiment Analysis. Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence, 2020, 33 (05): 426-438.
[28] Cambria, Erik; Hazarika, Devamanyu; Poria, Soujanya; Hussain, Amir; Subramanyam, R. B. V. Benchmarking Multimodal Sentiment Analysis. COMPUTATIONAL LINGUISTICS AND INTELLIGENT TEXT PROCESSING, CICLING 2017, PT II, 2018, 10762: 166-179.
[29] Lai, Songning; Hu, Xifeng; Xu, Haoxuan; Ren, Zhaoxia; Liu, Zhi. Multimodal sentiment analysis: A survey. DISPLAYS, 2023, 80.
[30] Sun, Hao; Liu, Jiaqing; Chen, Yen-Wei; Lin, Lanfen. Modality-invariant temporal representation learning for multimodal sentiment classification. INFORMATION FUSION, 2023, 91: 504-514.