Deep Learning Techniques for Polarity Classification in Multimodal Sentiment Analysis

Cited by: 29
Authors
Mahendhiran, P. D. [1 ]
Kannimuthu, S. [1 ]
Affiliation
[1] Karpagam College of Engineering, Department of Information Technology, Coimbatore 641021, Tamil Nadu, India
Keywords
Deep learning; multimodal sentiment analysis; natural language processing; feature extraction; artificial intelligence; FRAMEWORK;
DOI
10.1142/S0219622018500128
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Contemporary research in Multimodal Sentiment Analysis (MSA) using deep learning is becoming popular in Natural Language Processing. Enormous amounts of data are generated every day on social media platforms such as Facebook, WhatsApp, YouTube, Twitter and microblogs, and identifying the relevant information in such large multimodal data is difficult. Hence, there is a need for an intelligent MSA. Here, deep learning is used to improve the understanding and performance of MSA: it provides automatic feature extraction and helps the combined model that integrates linguistic, acoustic and video information achieve the best performance. This paper focuses on the various techniques used for classifying a given portion of natural-language text, audio and video according to the thoughts, feelings or opinions expressed in it, i.e., whether the general attitude is Neutral, Positive or Negative. The results show that the deep learning classifier gives better results than other machine learning classifiers such as KNN, Naive Bayes, Random Forest, Random Tree and the Neural Net model. The proposed deep learning MSA identifies sentiment in web videos; in preliminary proof-of-concept experiments on the ICT-YouTube dataset, the proposed multimodal system achieves an accuracy of 96.07%.
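The abstract describes feature-level fusion of linguistic, acoustic and video features followed by a deep classifier compared against KNN, Naive Bayes and Random Forest baselines. The sketch below illustrates such a pipeline in Python with scikit-learn; the synthetic data, feature dimensions and classifier settings are illustrative assumptions, not the authors' implementation.

# Minimal sketch (not the paper's implementation): early fusion of hypothetical
# unimodal feature vectors, then 3-class polarity classification, comparing a
# deep (MLP) model with classical baselines mentioned in the abstract.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_clips = 300  # hypothetical number of annotated video clips

# Placeholder unimodal features (in practice these would come from
# linguistic, acoustic and visual extraction pipelines).
text_feats = rng.normal(size=(n_clips, 100))   # e.g. word-embedding averages
audio_feats = rng.normal(size=(n_clips, 40))   # e.g. MFCC statistics
video_feats = rng.normal(size=(n_clips, 60))   # e.g. facial-expression descriptors
labels = rng.integers(0, 3, size=n_clips)      # 0 = Negative, 1 = Neutral, 2 = Positive

# Feature-level (early) fusion: concatenate the three modalities.
X = np.hstack([text_feats, audio_feats, video_feats])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "Deep MLP": MLPClassifier(hidden_layer_sizes=(256, 128), max_iter=500, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, model.predict(X_te)))

On real multimodal features the fused representation, rather than any single modality, is what the deep classifier exploits; with the random placeholder data above, the printed accuracies are only meant to show the comparison workflow.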
Pages: 883-910
Page count: 28