A Joint Neural Model for Fine-Grained Named Entity Classification of Wikipedia Articles

Cited: 18
Authors
Suzuki, Masatoshi [1 ]
Matsuda, Koji [2 ]
Sekine, Satoshi [3 ,4 ]
Okazaki, Naoaki [5 ]
Inui, Kentaro [4 ]
Affiliations
[1] Tohoku Univ, Sendai, Miyagi 9808579, Japan
[2] Tohoku Univ, Grad Sch Informat Sci, Sendai, Miyagi 9808579, Japan
[3] Language Craft Inc, Tokyo 1520031, Japan
[4] RIKEN Ctr Adv Intelligence Project, Tokyo 1030027, Japan
[5] Tokyo Inst Technol, Sch Comp, Dept Comp Sci, Tokyo 1528550, Japan
Keywords
named entity classification; wikipedia; multi-task learning; neural network;
DOI
10.1587/transinf.2017SWP0005
CLC Classification Number
TP [Automation Technology; Computer Technology]
Discipline Code
0812
Abstract
This paper addresses the task of assigning fine-grained named entity (NE) type labels to Wikipedia articles. Information about NE types is useful when extracting knowledge of NEs from natural language text. It is common to apply a supervised machine learning approach to named entity classification. However, when classifying into fine-grained types, a major challenge is alleviating the data sparseness problem, since far fewer training instances are available for each fine-grained type. To address this problem, we propose two methods. First, we introduce a multi-task learning framework in which the NE type classifiers are jointly trained with a neural network. The network has a shared hidden layer, in which we expect effective combinations of input features to be learned across different NE types. Second, we propose extending the input feature set by exploiting the hyperlink structure of Wikipedia. While most previous studies focus on engineering features from an article's contents, we observe that the contexts in which an article is mentioned can also provide useful clues for NE type classification. Concretely, we propose learning article vectors (i.e., entity embeddings) from Wikipedia's hyperlink structure using a Skip-gram model, and then incorporating the learned article vectors into the input feature set for NE type classification. To conduct large-scale practical experiments, we created a new dataset containing over 22,000 manually labeled articles. Using this dataset, we empirically show that each of our two ideas yields its own statistically significant improvement in classification accuracy. Moreover, we show that our proposed methods are particularly effective in labeling infrequent NE types. We have made the learned article vectors publicly available; the labeled dataset is available upon request from the authors.
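The abstract describes training a Skip-gram model over Wikipedia's hyperlink structure to obtain article vectors. A minimal sketch of one plausible first step is shown below: treating each article's sequence of outgoing links as a "sentence" and generating (target, context) training pairs with a sliding window, as in standard Skip-gram. The function name, the windowing scheme, and the use of outgoing-link order are illustrative assumptions, not the authors' exact procedure.

```python
def skipgram_pairs(link_sequences, window=2):
    """Generate (target, context) training pairs from per-article
    hyperlink sequences, Skip-gram style.

    link_sequences: list of lists, each inner list being the titles of
    articles linked from one Wikipedia article, in order of appearance.
    """
    pairs = []
    for seq in link_sequences:
        for i, target in enumerate(seq):
            # Context = up to `window` links on either side of the target.
            lo, hi = max(0, i - window), min(len(seq), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    pairs.append((target, seq[j]))
    return pairs
```

The resulting pairs could then be fed to any Skip-gram trainer (with negative sampling) to produce one embedding per article; those embeddings would be concatenated with content-based features as the abstract describes.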
Pages: 73-81
Page count: 9