Multimodal sentiment analysis for social media contents during public emergencies

Cited by: 4
Authors
Fan, Tao [1 ,2 ]
Wang, Hao [1 ]
Wu, Peng [2 ]
Ling, Chen [2 ]
Ahvanooey, Milad Taleby [1 ]
Affiliations
[1] Nanjing Univ, Sch Informat Management, Nanjing, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Econ & Management, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Public emergency; Multimodal sentiment analysis; Social platform; Textual sentiment analysis; Visual sentiment analysis; CLASSIFICATION; LSTM
DOI
10.2478/jdis-2023-0012
Chinese Library Classification (CLC): G25 [Library science; librarianship]; G35 [Information science; information services]
Subject classification codes: 1205; 120501
Abstract
Purpose: Public opinion expressed during public emergencies now involves not only text but also images. Existing work, however, focuses mainly on textual content and does not achieve satisfactory sentiment-analysis accuracy because it ignores multimodal content. In this paper, we propose combining the texts and images generated on social media to perform sentiment analysis.
Design/methodology/approach: We propose a Deep Multimodal Fusion Model (DMFM) that combines textual and visual sentiment analysis. We first train a word2vec model on a large-scale public-emergency corpus to obtain semantically rich word vectors as the input to textual sentiment analysis, and a BiLSTM encodes them into textual embeddings. To fully exploit the visual information in images, a modified VGG16-based sentiment analysis network, pretrained and then fine-tuned with the best-performing strategy, is used. A multimodal fusion method then fuses the textual and visual embeddings and produces the predicted labels.
Findings: We performed extensive experiments on Weibo and Twitter public-emergency datasets to evaluate the proposed model. The results demonstrate that DMFM achieves higher accuracy than the baseline models and that introducing images boosts sentiment-analysis performance during public emergencies.
Research limitations: In future work, we will evaluate the model on broader datasets and investigate better ways to learn fused multimodal information.
Practical implications: We build an efficient multimodal sentiment analysis model for social media content posted during public emergencies.
Originality/value: We take into account the images posted by online users on social platforms during public emergencies. The proposed method opens a novel scope for sentiment analysis during public emergencies and can support government decision-making when formulating policies.
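The pipeline in the abstract (word2vec embeddings feeding a BiLSTM text encoder, a fine-tuned VGG16 visual encoder, and a fusion step that produces sentiment labels) can be sketched in a few lines of PyTorch. The sketch below is illustrative only: the hidden sizes, the concatenation-based fusion, the replaced VGG16 head, and the three-class output are assumptions, since the abstract does not fix these details.

# A minimal sketch of a DMFM-style model, assuming concatenation fusion
# and three sentiment classes; not the authors' exact implementation.
import torch
import torch.nn as nn
from torchvision import models

class DMFMSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=128, num_classes=3):
        super().__init__()
        # Text branch: word2vec-style embeddings (pretrained corpus vectors
        # would be loaded into this layer) encoded by a bidirectional LSTM.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Visual branch: ImageNet-pretrained VGG16 with its final classifier
        # layer replaced, mirroring the fine-tuned VGG16 the abstract describes.
        vgg = models.vgg16(weights="IMAGENET1K_V1")
        vgg.classifier[6] = nn.Linear(4096, 2 * hidden_dim)
        self.vgg = vgg
        # Fusion + prediction: concatenate both embeddings and classify.
        self.classifier = nn.Sequential(
            nn.Linear(4 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, token_ids, images):
        _, (h_n, _) = self.bilstm(self.embedding(token_ids))
        text_emb = torch.cat([h_n[-2], h_n[-1]], dim=1)  # fwd + bwd final states
        visual_emb = self.vgg(images)                    # (batch, 2*hidden_dim)
        fused = torch.cat([text_emb, visual_emb], dim=1)
        return self.classifier(fused)                    # sentiment logits

Given token_ids of shape (batch, seq_len) and images of shape (batch, 3, 224, 224), DMFMSketch(vocab_size=50000)(token_ids, images) returns sentiment logits; the actual DMFM would load the corpus-trained word2vec vectors into the embedding layer and apply the paper's best-performing VGG16 fine-tuning strategy.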
Pages: 61-87
Number of pages: 27