FeQA: Fusion and enhancement of multi-source knowledge on question answering

Cited by: 4
Authors
Zhang, Jiahao [1 ]
Huang, Bo [1 ]
Fujita, Hamido [2 ,3 ,4 ,5 ]
Zeng, Guohui [1 ]
Liu, Jin [1 ]
Affiliations
[1] Shanghai Univ Engn Sci, Sch Elect & Elect Engn, Shanghai 201600, Peoples R China
[2] HUTECH Univ, Fac Informat Technol, Ho Chi Minh City, Vietnam
[3] Univ Teknol Malaysia, Malaysia Japan Int Inst Technol MJIIT, Kuala Lumpur, Malaysia
[4] Univ Granada, Andalusian Res Inst Data Sci & Computat Intelligen, Granada, Spain
[5] Iwate Prefectural Univ, Iwate, Japan
Funding
National Natural Science Foundation of China;
Keywords
Question answering system; Semantic enhancement; Knowledge graph; Knowledge interaction;
DOI
10.1016/j.eswa.2023.120286
CLC classification
TP18 [Theory of artificial intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In question answering tasks, the answer usually must be inferred from the question. Most approaches use a pretrained language model to obtain semantic embeddings of the question and the candidate answers, but such models cannot accurately represent the latent relationships between the entities mentioned in a question. Researchers have therefore introduced knowledge graphs to support reasoning from question entities to answer entities. Knowledge graphs, however, lack background information about entities, which can lead to incorrect reasoning. To address these problems, a new question answering model, FeQA, is proposed that combines a large-scale pretrained language model with a knowledge graph. The former uses a dual-attention mechanism to enhance question semantics with Wiktionary and other question answering datasets, while the latter uses a graph neural network to reason over entities. During the interaction between the two knowledge modalities, the language model provides a basis for reasoning over the graph's nodes, and the graph provides structured knowledge to the language model. After several reasoning iterations, the final answer is obtained from the knowledge of both modalities. Experimental results on the CommonsenseQA and OpenBookQA datasets show that the model outperforms the baseline models. Ablation studies show that the model's components and knowledge sources each contribute substantially to question answering performance, and extended experiments demonstrate good applicability.
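The abstract's central mechanism — a language-model text representation and knowledge-graph node representations exchanging information over several reasoning iterations — can be sketched in miniature. This is a hypothetical illustration only: the function names (`attend`, `reason`), the toy fully connected graph, and the simple averaging/attention updates are assumptions for exposition, not the paper's actual FeQA architecture.

```python
# Hypothetical sketch of an FeQA-style interaction loop: a text vector from a
# language model and node vectors from a knowledge graph are fused over
# several reasoning iterations. All names and update rules are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def attend(query, keys):
    """Dot-product attention: pool `keys` weighted by similarity to `query`."""
    scores = keys @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ keys

def reason(text_vec, node_vecs, adjacency, iterations=3):
    """Alternate GNN-style message passing with text<->graph fusion."""
    for _ in range(iterations):
        # Graph side: one round of neighbor averaging over the adjacency,
        # then condition the nodes on the text representation
        # (the language model "provides a basis" for node reasoning).
        node_vecs = adjacency @ node_vecs / adjacency.sum(1, keepdims=True)
        node_vecs = node_vecs + text_vec
        # Text side: the graph returns structured knowledge to the text
        # representation via attention pooling over the node vectors.
        text_vec = 0.5 * (text_vec + attend(text_vec, node_vecs))
    return text_vec, node_vecs

d, n = 8, 5                              # toy embedding size and node count
text = rng.standard_normal(d)
nodes = rng.standard_normal((n, d))
adj = np.ones((n, n))                    # fully connected toy graph
fused_text, fused_nodes = reason(text, nodes, adj)
print(fused_text.shape, fused_nodes.shape)   # (8,) (5, 8)
```

After the loop, a real system would score each candidate answer from the fused text and node representations; here the sketch only shows the bidirectional exchange the abstract describes.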
Pages: 10