Knowledge Base Question Answering Method Incorporating Fact Text

Cited: 0
Authors
Wang G. [1,2]
He S. [3]
Liu K. [3]
Yu Z. [1,2]
Gao S. [1,2]
Guo J. [1,2]
Affiliations
[1] Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming
[2] Yunnan Key Laboratory of Artificial Intelligence, Kunming University of Science and Technology, Kunming
[3] National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing
Source
Moshi Shibie yu Rengong Zhineng/Pattern Recognition and Artificial Intelligence | 2021, Vol. 34, No. 3
Funding
National Key Research and Development Program of China; National Natural Science Foundation of China
Keywords
Deep Learning; Knowledge Base; Question Answering System; Representation Learning;
DOI
10.16451/j.cnki.issn1003-6059.202103008
Abstract
In natural language questions, relations in the knowledge base are expressed in diverse ways, so matching questions to knowledge base answers through representation learning remains a challenge. To address this shortcoming, a knowledge base question answering method incorporating fact text is proposed. Entities, entity types and relations in the knowledge base are converted into fact text, and a pre-trained language model (BERT) is employed to represent them. Exploiting the rich semantic knowledge of BERT, questions and candidate answers are embedded into a low-dimensional semantic space, and the answer whose representation is semantically closest to the question is selected by similarity computation. Experiments show that the proposed method is effective and robust in answering common simple questions. © 2021, Science Press. All rights reserved.
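The record does not include the authors' implementation, but the retrieval idea in the abstract (flatten knowledge base facts into text, embed question and facts with a BERT-based encoder, return the closest fact) can be sketched as follows. This is a minimal illustrative sketch, not the paper's method: it assumes the sentence-transformers library, the generic pre-trained checkpoint bert-base-nli-mean-tokens, and a few hypothetical knowledge base triples; the authors' actual encoder, training objective and data differ.

```python
# Minimal sketch (assumptions noted above), not the authors' code.
from sentence_transformers import SentenceTransformer, util

# Hypothetical knowledge-base facts (entity, entity type, relation, object)
# flattened into natural-language "fact text", as the abstract describes.
facts = [
    ("Barack Obama", "person", "place of birth", "Honolulu"),
    ("Barack Obama", "person", "spouse", "Michelle Obama"),
]
fact_texts = [f"{e} is a {t}. {e} {r} {o}." for e, t, r, o in facts]

question = "Where was Barack Obama born?"

# Encode the question and the fact texts into the same semantic space.
model = SentenceTransformer("bert-base-nli-mean-tokens")  # assumed checkpoint
q_vec = model.encode(question, convert_to_tensor=True)
f_vecs = model.encode(fact_texts, convert_to_tensor=True)

# Select the fact whose embedding has the highest cosine similarity
# to the question embedding.
scores = util.cos_sim(q_vec, f_vecs)[0]
best = int(scores.argmax())
print(fact_texts[best], float(scores[best]))
```

In the paper, the answer is read off from the best-matching fact; here the selected fact text itself stands in for that step.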
Pages: 267-274
Number of pages: 7