Context-Dependent Multimodal Sentiment Analysis Based on a Complex Attention Mechanism

Cited by: 2
Authors
Deng, Lujuan [1 ]
Liu, Boyi [1 ]
Li, Zuhe [1 ]
Ma, Jiangtao [1 ]
Li, Hanbing [2 ]
Affiliations
[1] Zhengzhou Univ Light Ind, Sch Comp & Commun Engn, Zhengzhou 450002, Peoples R China
[2] Songshan Lab, Zhengzhou 450000, Peoples R China
Keywords
sentiment analysis; deep learning; complex attention mechanism; classification
DOI
10.3390/electronics12163516
CLC number
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Multimodal sentiment analysis aims to understand people's attitudes and opinions from different data forms. Traditional modality fusion methods for multimodal sentiment analysis concatenate or multiply the various modalities without fully exploiting contextual information or the correlations between modalities. To address this problem, this article proposes a new multimodal sentiment analysis model based on a recurrent neural network with a complex attention mechanism. First, the raw data are preprocessed and numerical feature representations are obtained through feature extraction. Next, the numerical features are fed into the recurrent neural network, and its outputs are fused across modalities by a complex attention mechanism layer. The complex attention mechanism exploits enhanced non-linearity to capture inter-modal correlations more effectively, thereby improving the performance of multimodal sentiment analysis. Finally, the fused representation is passed to the classification layer, which produces the sentiment output. This process effectively captures the semantic information and contextual relationships of the input sequence and fuses the different pieces of modal information. Our model was tested on the CMU-MOSEI dataset, achieving an accuracy of 82.04%.
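The pipeline the abstract describes (feature extraction, recurrent encoding, complex-attention fusion, classification) can be sketched in a few lines. Everything below is an illustrative assumption rather than the authors' exact architecture: the feature dimension, the random projection vectors standing in for learned parameters, and the formulation of the complex attention score as a softmax over score magnitudes are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def complex_attention_fuse(modalities):
    """Fuse per-modality feature vectors with complex-valued scores.

    Each modality receives a complex score a + bi (here produced by two
    random projections standing in for learned weights); attention weights
    are the softmax of the scores' magnitudes, which adds non-linearity
    compared with a plain real-valued dot-product score.
    """
    H = np.stack(modalities)                      # (num_modalities, d)
    real = H @ rng.standard_normal(H.shape[1])    # real part of each score
    imag = H @ rng.standard_normal(H.shape[1])    # imaginary part
    weights = softmax(np.abs(real + 1j * imag))   # magnitudes -> weights
    return weights @ H                            # (d,) fused representation

# Toy stand-ins for the RNN outputs of text, audio, and visual modalities.
text, audio, visual = (rng.standard_normal(8) for _ in range(3))
fused = complex_attention_fuse([text, audio, visual])
print(fused.shape)  # (8,)
```

In a trained model the fused vector would then go through the classification layer; here the random projections simply demonstrate how a complex-valued score can gate the contribution of each modality.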
Pages: 13