Generating Word Embeddings from an Extreme Learning Machine for Sentiment Analysis and Sequence Labeling Tasks

Cited by: 38
Authors
Lauren, Paula [1 ]
Qu, Guangzhi [1 ]
Yang, Jucheng [2 ]
Watta, Paul [3 ]
Huang, Guang-Bin [4 ]
Lendasse, Amaury [5 ]
Affiliations
[1] Oakland Univ, Dept Comp Sci & Engn, Rochester, MI 48309 USA
[2] Tianjin Univ Sci & Technol, Coll Comp Sci & Informat Engn, Tianjin, Peoples R China
[3] Univ Michigan, Dept Elect & Comp Engn, Dearborn, MI 48128 USA
[4] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore, Singapore
[5] Univ Iowa, Dept Ind & Syst Engn, Iowa City, IA USA
Funding
National Natural Science Foundation of China;
Keywords
Word embeddings; Extreme learning machine (ELM); Word2Vec; Global vectors (GloVe); Text categorization; Sentiment analysis; Sequence labeling; REPRESENTATIONS;
DOI
10.1007/s12559-018-9548-y
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Word embeddings are low-dimensional distributed representations that encompass a set of language modeling and feature learning techniques from Natural Language Processing (NLP). Words or phrases from the vocabulary are mapped to vectors of real numbers in a low-dimensional space. In previous work, we proposed using an Extreme Learning Machine (ELM) for generating word embeddings. In this research, we apply ELM-based word embeddings to the NLP task of text categorization, specifically sentiment analysis and sequence labeling. The ELM-based word embeddings use a count-based approach similar to the Global Vectors (GloVe) model, in which a word-context matrix is computed and then matrix factorization is applied. A comparative study is done with Word2Vec and GloVe, the two popular state-of-the-art models. The results show that ELM-based word embeddings slightly outperform these two methods on the sentiment analysis and sequence labeling tasks. In addition, only one hyperparameter is needed for ELM, whereas several are required for the other methods. ELM-based word embeddings are thus comparable to the state-of-the-art Word2Vec and GloVe models. Moreover, the count-based ELM model exhibits word similarities to both the count-based GloVe and the prediction-based Word2Vec models, with subtle differences.
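The count-based pipeline the abstract describes, computing a word-context co-occurrence matrix and then factorizing it, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the toy corpus, window size, embedding dimension, and the ELM-autoencoder-style closed-form least-squares solve are all choices made here for demonstration.

```python
import numpy as np

# Toy corpus; the paper's experiments use far larger corpora.
corpus = [
    "the cat sat on the mat",
    "the dog sat on the log",
    "cats and dogs are animals",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Word-context co-occurrence counts within a symmetric window.
window = 2
V = len(vocab)
X = np.zeros((V, V))
for sent in tokens:
    for i in range(len(sent)):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                X[idx[sent[i]], idx[sent[j]]] += 1.0

# ELM-autoencoder-style factorization: a random hidden projection,
# then output weights solved in closed form with the pseudoinverse.
rng = np.random.default_rng(0)
dim = 5                        # embedding dimensionality (assumed value)
W = rng.standard_normal((V, dim))
b = rng.standard_normal(dim)
H = np.tanh(X @ W + b)         # random hidden-layer activations
beta = np.linalg.pinv(H) @ X   # solves min ||H @ beta - X||_F

# Project co-occurrence rows through beta^T to obtain embeddings.
embeddings = X @ beta.T        # shape (V, dim), one vector per word
```

Because the output weights are obtained by a single linear solve rather than iterative gradient descent, the only quantity to tune in this sketch is the hidden/embedding dimension, which mirrors the abstract's point that ELM needs just one hyperparameter.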
Pages: 625-638
Page count: 14