End-to-end Knowledge Triplet Extraction Combined with Adversarial Training

Cited by: 0
Authors
Huang P. [1 ]
Zhao X. [1 ,2 ]
Fang Y. [1 ]
Zhu H. [1 ,3 ]
Xiao W. [1 ,2 ]
Affiliations
[1] Science and Technology on Information Systems Engineering Laboratory, National University of Defense Technology, Changsha
[2] Collaborative Innovation Center of Geospatial Technology (Wuhan University), Wuhan
[3] College of Economics and Trade, Changsha Commerce & Tourism College, Changsha
Source
Jisuanji Yanjiu yu Fazhan/Computer Research and Development | 2019, Vol. 56, No. 12
Funding
National Natural Science Foundation of China
Keywords
Adversarial training; End-to-end network; Knowledge graph; Knowledge triplet extraction; Tagging scheme;
DOI
10.7544/issn1000-1239.2019.20190640
Abstract
As an effective representation of the real world, the knowledge graph has attracted wide attention from both academia and industry, and its ability to represent knowledge accurately underpins upper-level applications such as information services, intelligent search, and automatic question answering. A fact (piece of knowledge) in the form of a triplet (head_entity, relation, tail_entity) is the basic unit of a knowledge graph. Since the facts in existing knowledge graphs fall far short of describing the real world, acquiring more knowledge for knowledge graph completion and construction is crucial. This paper investigates the problem of knowledge triplet extraction within the task of knowledge acquisition, and proposes an end-to-end knowledge triplet extraction method combined with adversarial training. Traditional techniques, whether pipeline-based or joint extraction, fail to exploit the link between the two subtasks of named entity recognition and relation extraction, which leads to error propagation and degraded extraction effectiveness. To overcome these flaws, we adopt a joint entity-relation tagging strategy and leverage an end-to-end framework to tag the text automatically and classify the tagging results. In addition, a self-attention mechanism is added to assist the encoding of the text, an objective function with a bias term is introduced to increase attention on relevant entities, and adversarial training is utilized to improve the robustness of the model. In experiments, we evaluate the proposed knowledge triplet extraction model with three evaluation metrics and analyze the results from four aspects. The experimental results verify that our model outperforms other state-of-the-art alternatives on knowledge triplet extraction. © 2019, Science Press. All rights reserved.
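The adversarial training mentioned in the abstract builds on the fast-gradient idea of Goodfellow et al. (reference [10]): perturb the input word embeddings in the direction of the loss gradient, bounded by a small norm, and train on the perturbed inputs. A minimal sketch of the perturbation step follows; the function name, `epsilon`, and the toy vectors are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
import numpy as np

def adversarial_perturbation(grad, epsilon=0.5):
    """Fast-gradient-style perturbation r_adv = epsilon * g / ||g||_2.

    `grad` is the gradient of the training loss with respect to an
    embedding vector; the returned vector has L2 norm `epsilon` (or is
    zero when the gradient vanishes) and points in the worst-case
    direction that locally increases the loss the most.
    """
    norm = np.linalg.norm(grad)
    if norm == 0:
        return np.zeros_like(grad)
    return epsilon * grad / norm

# Toy usage: a hypothetical loss gradient for one word embedding.
g = np.array([3.0, 4.0])
r_adv = adversarial_perturbation(g)            # length-epsilon perturbation
perturbed_embedding = np.array([0.1, 0.2]) + r_adv
```

During training, the model would be optimized on both the clean and the perturbed embeddings, which is what makes the tagger more robust to small input variations.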
Pages: 2536-2548
Number of pages: 12
References (41 in total)
[1]  
Liu Q., Li Y., Duan H., Et al., Knowledge graph construction techniques, Journal of Computer Research and Development, 53, 3, pp. 582-600, (2016)
[2]  
Miller G.A., WordNet: A lexical database for English, Communications of the ACM, 38, 11, pp. 39-41, (1995)
[3]  
Wang H., Technology of large scale knowledge graph, Communications of the CCF, 10, 3, pp. 64-68, (2014)
[4]  
Nguyen T.H., Grishman R., Relation extraction: Perspective from convolutional neural networks, Proc of the 1st Workshop on Vector Space Modeling for Natural Language Processing (VS@NAACL-HLT 2015), pp. 39-48, (2015)
[5]  
Nadeau D., Sekine S., A survey of named entity recognition and classification, Lingvisticae Investigationes, 30, 1, pp. 3-26, (2007)
[6]  
Rink B., Harabagiu S., UTD: Classifying semantic relations by combining lexical and semantic resources, Proc of the 5th Int Workshop on Semantic Evaluation (SemEval@ACL 2010), pp. 256-259, (2010)
[7]  
Li Q., Ji H., Incremental joint extraction of entity mentions and relations, Proc of the 52nd Annual Meeting of the Association for Computational Linguistics, pp. 402-412, (2014)
[8]  
Miwa M., Bansal M., End-to-end relation extraction using LSTMs on sequences and tree structures, Proc of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 1105-1116, (2016)
[9]  
Szegedy C., Zaremba W., Sutskever I., Et al., Intriguing properties of neural networks, Proc of the 2nd Int Conf on Learning Representations (ICLR), (2014)
[10]  
Goodfellow I.J., Shlens J., Szegedy C., Explaining and harnessing adversarial examples, Proc of the 3rd Int Conf on Learning Representations (ICLR), (2015)