Combine HowNet lexicon to train phrase recursive autoencoder for sentence-level sentiment analysis

Cited: 77
Authors
Fu, Xianghua [1 ]
Liu, Wangwang [1 ]
Xu, Yingying [1 ]
Cui, Laizhong [1 ]
Affiliations
[1] Shenzhen Univ, Coll Comp Sci & Software Engn, Shenzhen 518060, Guangdong, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Sentiment analysis; Recursive autoencoder; HowNet lexicon; Phrase structure tree;
DOI
10.1016/j.neucom.2017.01.079
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification
081104; 0812; 0835; 1405;
Abstract
Detecting the sentiment of sentences in online reviews is still a challenging task. Traditional machine learning methods often use bag-of-words representations, which cannot properly capture complex linguistic phenomena in sentiment analysis. Recently, recursive autoencoder (RAE) methods have been proposed for sentence-level sentiment analysis. They use word embeddings to represent each word, and learn compositional vector representations of phrases and sentences with recursive autoencoders. Although RAE methods outperform other state-of-the-art sentiment prediction approaches on commonly used datasets, they tend to generate very deep parse trees and need a large amount of labeled data for each node when learning compositional vector representations. Furthermore, RAE methods mainly combine adjacent words in sequence with a greedy strategy, which makes it difficult to capture semantic relations between distant words. To solve these issues, we propose a semi-supervised method that combines the HowNet lexicon to train phrase recursive autoencoders (we call it CHL-PRAE). CHL-PRAE first constructs the phrase recursive autoencoder (PRAE) model. The model then calculates the sentiment orientation of each node with the HowNet lexicon, which acts as the sentiment label when we train the softmax classifier of PRAE. Furthermore, our CHL-PRAE model conducts bidirectional training to capture global information. Compared with RAE and supervised methods such as support vector machines (SVM) and naive Bayes on English and Chinese datasets, the experimental results show that CHL-PRAE provides the best performance for sentence-level sentiment analysis. (C) 2017 Elsevier B.V. All rights reserved.
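To make the abstract's core mechanism concrete, here is a minimal NumPy sketch of one recursive-autoencoder node: two child vectors are composed into a parent phrase vector, the node pays a reconstruction loss (the autoencoder objective) plus a softmax cross-entropy loss against a sentiment label, which in CHL-PRAE would come from the HowNet lexicon. The dimension, initialization, and function names are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # embedding dimension (assumed; the paper's value may differ)

# Parameters of one RAE node (hypothetical random initialization).
W_comp = rng.normal(scale=0.1, size=(d, 2 * d))   # composition weights
b_comp = np.zeros(d)
W_rec = rng.normal(scale=0.1, size=(2 * d, d))    # reconstruction weights
b_rec = np.zeros(2 * d)
W_cls = rng.normal(scale=0.1, size=(2, d))        # softmax over {negative, positive}
b_cls = np.zeros(2)

def compose(c1, c2):
    """Merge two child vectors into a parent phrase vector."""
    return np.tanh(W_comp @ np.concatenate([c1, c2]) + b_comp)

def reconstruction_loss(c1, c2, p):
    """Autoencoder objective: reconstruct both children from the parent."""
    rec = W_rec @ p + b_rec
    return 0.5 * np.sum((rec - np.concatenate([c1, c2])) ** 2)

def sentiment_loss(p, label):
    """Cross-entropy against a lexicon-derived sentiment label (0=neg, 1=pos)."""
    logits = W_cls @ p + b_cls
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return -np.log(probs[label])

# Two word vectors merged into one phrase node; in CHL-PRAE the HowNet
# lexicon would supply `label` for this node during semi-supervised training.
c1, c2 = rng.normal(size=d), rng.normal(size=d)
p = compose(c1, c2)
total = reconstruction_loss(c1, c2, p) + sentiment_loss(p, label=1)
```

In the full model this step is applied recursively over a phrase structure tree rather than a greedy word-by-word merge, and the two loss terms are summed over all internal nodes before backpropagation.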
Pages: 18-27
Number of pages: 10