Boosting question answering over knowledge graph with reward integration and policy evaluation under weak supervision

Cited by: 51
Authors
Bi, Xin [1 ,2 ]
Nie, Haojie [3 ]
Zhang, Guoliang [1 ,2 ]
Hu, Lei [1 ,2 ]
Ma, Yuliang [4 ]
Zhao, Xiangguo [5 ]
Yuan, Ye [6 ]
Wang, Guoren [6 ]
Affiliations
[1] Northeastern Univ, Key Lab, Minist Educ Safe Min Deep Met Mines, Shenyang 110819, Peoples R China
[2] Northeastern Univ, Key Lab Liaoning Prov Deep Engn & Intelligent Tech, Shenyang 110819, Peoples R China
[3] Northeastern Univ, Sch Comp Sci & Engn, Shenyang 110819, Peoples R China
[4] Northeastern Univ, Sch Business Adm, Shenyang 110819, Peoples R China
[5] Northeastern Univ, Coll Software, Shenyang 110819, Peoples R China
[6] Beijing Inst Technol, Sch Comp Sci & Technol, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Knowledge graph-based question answering; Multi-hop reasoning; Weak supervision; Augmented intelligence for decision-making;
DOI
10.1016/j.ipm.2022.103242
CLC number (Chinese Library Classification)
TP [automation technology, computer technology]
Subject classification code
0812
Abstract
Among existing knowledge graph-based question answering (KGQA) methods, relation supervision methods require labeled intermediate relations for stepwise reasoning. To avoid this enormous cost of labeling on large-scale knowledge graphs, weak supervision methods, which use only the answer entity to evaluate rewards as supervision, have been introduced. However, the lack of intermediate supervision raises the issue of sparse rewards, which may result in two types of incorrect reasoning paths: (1) incorrectly reasoned relations, even when the final answer entity is correct; and (2) correctly reasoned relations in the wrong order, which leads to an incorrect answer entity. To address these issues, this paper formulates the multi-hop KGQA task as a Markov decision process and proposes a model based on Reward Integration and Policy Evaluation (RIPE). In this model, an integrated reward function is designed to evaluate the reasoning process by leveraging both terminal and instant rewards. The intermediate supervision for each reasoning hop is constructed from both the fitness of the taken action and an evaluation of the unreasoned information remaining in the updated question embeddings. In addition, to lead the agent to the answer entity along the correct reasoning path, an evaluation network is designed to assess the action taken at each hop. Extensive ablation studies and comparative experiments are conducted on four KGQA benchmark datasets. The results demonstrate that the proposed model outperforms state-of-the-art approaches in terms of answering accuracy.
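As a rough illustration of the reward-integration idea summarized in the abstract, the minimal sketch below mixes per-hop instant rewards (action fitness plus a score for the question information still unreasoned) with a terminal answer reward into discounted returns, then subtracts a baseline standing in for the evaluation network. The function names, the mixing weight alpha, the toy three-hop episode, and the mean-return baseline are assumptions made for illustration only, not the authors' implementation.

# Minimal, hypothetical sketch of reward integration for weakly supervised
# multi-hop KGQA; not the RIPE code released with the paper.
import numpy as np

def instant_reward(action_fit: float, unreasoned_score: float,
                   alpha: float = 0.5) -> float:
    """Hypothetical instant reward: fitness of the taken action plus an
    evaluation of how little question information remains unreasoned."""
    return alpha * action_fit + (1.0 - alpha) * (1.0 - unreasoned_score)

def terminal_reward(predicted_entity: str, answer_entity: str) -> float:
    """Weak supervision signal: 1 if the final entity is the labeled answer."""
    return 1.0 if predicted_entity == answer_entity else 0.0

def integrated_returns(instant_rewards, final_reward, gamma: float = 0.9):
    """Discounted returns that integrate per-hop (instant) and terminal rewards."""
    rewards = list(instant_rewards)
    rewards[-1] += final_reward          # terminal reward enters at the last hop
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

# Toy 3-hop episode: per-hop scores and the predicted answer are made up.
hops = [(0.8, 0.6), (0.7, 0.3), (0.9, 0.1)]          # (action_fit, unreasoned)
instants = [instant_reward(f, u) for f, u in hops]
G = integrated_returns(instants, terminal_reward("Paris", "Paris"))
baseline = np.mean(G)                                 # stand-in for the evaluation network
advantages = np.array(G) - baseline                   # what a policy-gradient update would use
print(advantages)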
Pages: 17