Multi-Model Fusion Framework Using Deep Learning for Visual-Textual Sentiment Classification

Cited by: 11
Authors
Al-Tameemi, Israa K. Salman [1,3]
Feizi-Derakhshi, Mohammad-Reza [1 ]
Pashazadeh, Saeed [2 ]
Asadpour, Mohammad [2 ]
Affiliations
[1] Univ Tabriz, Dept Comp Engn, Computerized Intelligence Syst Lab, Fac Elect & Comp Engn, Tabriz 51368, Iran
[2] Univ Tabriz, Dept Comp Engn, Fac Elect & Comp Engn, Tabriz 51368, Iran
[3] Iraqi Minist Ind & Minerals, State Co Engn Rehabil & Testing, Baghdad 10011, Iraq
Source
CMC-COMPUTERS MATERIALS & CONTINUA | 2023, Vol. 76, No. 02
Keywords
Sentiment analysis; multimodal classification; deep learning; joint fusion; decision fusion; interpretability
DOI
10.32604/cmc.2023.040997
CLC number
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
Multimodal Sentiment Analysis (SA) is gaining popularity due to its broad application potential. Existing studies have focused on the SA of single modalities, such as text or images, and therefore struggle to handle social media data that combines multiple modalities. Moreover, most multimodal research has concentrated on merely combining the two modalities rather than exploring their complex correlations, leading to unsatisfactory sentiment classification results. Motivated by this, we propose a new visual-textual sentiment classification model named Multi-Model Fusion (MMF), which uses a mixed fusion framework for SA to effectively capture the essential information and the intrinsic relationship between the visual and textual content. The proposed model comprises three deep neural networks. Two different networks extract the most emotionally relevant aspects of the image and text data, so that more discriminative features are gathered for accurate sentiment classification. A multichannel joint fusion model with a self-attention technique is then used to exploit the intrinsic correlation between visual and textual characteristics and to obtain emotionally rich information for joint sentiment classification. Finally, the outputs of the three classifiers are integrated using a decision fusion scheme to improve the robustness and generalizability of the proposed model. An interpretable visual-textual sentiment classification model is further developed using Local Interpretable Model-agnostic Explanations (LIME) to ensure the model's explainability and resilience. The proposed MMF model has been tested on four real-world sentiment datasets, achieving 99.78% accuracy on Binary_Getty (BG), 99.12% on Binary_iStock (BIS), 95.70% on Twitter, and 79.06% on the Multi-View Sentiment Analysis (MVSA) dataset. These results demonstrate the superior performance of our MMF model compared to single-model approaches and current state-of-the-art techniques based on the model evaluation criteria.
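The abstract gives no implementation details, so the following is only a minimal PyTorch sketch of how a multichannel joint fusion module with self-attention over visual and textual features could look. All layer sizes, the 4-head attention, and the names (JointFusionSketch, img_dim, txt_dim) are assumptions for illustration, not the authors' architecture.

import torch
import torch.nn as nn

class JointFusionSketch(nn.Module):
    # Projects each modality's feature vector into a shared space, treats the
    # pair as a two-token sequence, and applies self-attention before classifying.
    def __init__(self, img_dim=2048, txt_dim=768, d_model=256, n_classes=3):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, d_model)
        self.txt_proj = nn.Linear(txt_dim, d_model)
        self.self_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, img_feat, txt_feat):
        # img_feat: (batch, img_dim), txt_feat: (batch, txt_dim)
        tokens = torch.stack(
            [self.img_proj(img_feat), self.txt_proj(txt_feat)], dim=1)  # (batch, 2, d_model)
        fused, _ = self.self_attn(tokens, tokens, tokens)  # cross-modal self-attention
        return self.classifier(fused.mean(dim=1))          # pooled joint representation

model = JointFusionSketch()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))  # -> (4, 3) class logits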
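Likewise, the decision fusion step can be illustrated as a weighted average of the three classifiers' class probabilities. The weights below are placeholders; the paper's actual fusion rule is not given in the abstract.

def decision_fusion(img_logits, txt_logits, joint_logits, weights=(0.25, 0.25, 0.50)):
    # Convert each classifier's logits to probabilities, average them with
    # fixed weights (illustrative values), and take the argmax as the label.
    probs = [torch.softmax(l, dim=-1) for l in (img_logits, txt_logits, joint_logits)]
    fused = sum(w * p for w, p in zip(weights, probs))
    return fused.argmax(dim=-1)  # final sentiment label per sample

preds = decision_fusion(torch.randn(4, 3), torch.randn(4, 3), torch.randn(4, 3))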
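For the interpretability component, the abstract names LIME. The snippet below shows the standard lime package API for explaining a single text prediction; predict_proba is a hypothetical stand-in for the trained text classifier, which must return an (n_samples, n_classes) probability array.

import numpy as np
from lime.lime_text import LimeTextExplainer

def predict_proba(texts):
    # Hypothetical stand-in for the trained text sentiment classifier.
    rng = np.random.default_rng(0)
    p = rng.random((len(texts), 3))
    return p / p.sum(axis=1, keepdims=True)

explainer = LimeTextExplainer(class_names=["negative", "neutral", "positive"])
exp = explainer.explain_instance("great view, terrible service",
                                 predict_proba, num_features=4)
print(exp.as_list())  # tokens ranked by their contribution to the prediction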
Pages: 2145-2177
Page count: 33