Coupling Large Language Models with Logic Programming for Robust and General Reasoning from Text

Cited by: 0
Authors
Yang, Zhun
Ishay, Adam [1 ]
Lee, Joohyung [1 ,2 ]
Affiliations
[1] Arizona State Univ, Tempe, AZ 85287 USA
[2] Samsung Res, Suwon, South Korea
Source
FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023 | 2023
Funding
U.S. National Science Foundation
Keywords
CALCULUS;
DOI
Not available
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While large language models (LLMs), such as GPT-3, appear to be robust and general, their reasoning ability is not at a level to compete with the best models trained for specific natural language reasoning problems. In this study, we observe that a large language model can serve as a highly effective few-shot semantic parser. It can convert natural language sentences into a logical form that serves as input for answer set programs (ASP), a logic-based declarative knowledge representation formalism. The combination results in a robust and general system that can handle multiple question-answering tasks without requiring retraining for each new task. It only needs a few examples to guide the LLM's adaptation to a specific task, along with reusable ASP knowledge modules that can be applied to multiple tasks. We demonstrate that this method achieves state-of-the-art performance on several NLP benchmarks, including bAbI, StepGame, CLUTRR, and gSCAN. Additionally, it successfully tackles robot planning tasks that an LLM alone fails to solve.
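For orientation, the following is a minimal sketch (not the authors' released code) of the pipeline the abstract describes: an LLM acts as a few-shot semantic parser that turns sentences into ASP facts, and a reusable ASP knowledge module performs the reasoning. The prompt design and the helper llm_parse_to_facts are hypothetical placeholders; the LLM call is stubbed with a hardcoded parse so the sketch runs with only the clingo Python package installed.

import clingo

# Reusable knowledge module (a simplified, CLUTRR-style kinship rule set).
# relation(R, X, Y) reads "X is the R of Y".
KINSHIP_MODULE = """
relation(grandmother, X, Z) :- relation(mother, X, Y), relation(mother, Y, Z).
relation(grandmother, X, Z) :- relation(mother, X, Y), relation(father, Y, Z).
"""

def llm_parse_to_facts(story: str) -> str:
    """Hypothetical stand-in for the few-shot LLM semantic parser.

    In the paper's setting an LLM such as GPT-3 is prompted with a handful of
    (sentence -> atomic facts) examples; here the output is hardcoded so the
    example stays self-contained and runnable.
    """
    return """
relation(mother, alice, bob).
relation(mother, bob, carol).
"""

def answer(story: str, query: str) -> list[str]:
    # Combine the task-independent knowledge module with the story's facts,
    # then let the ASP solver derive the answer.
    ctl = clingo.Control(["0"])  # "0" = enumerate all answer sets
    ctl.add("base", [], KINSHIP_MODULE + llm_parse_to_facts(story))
    ctl.ground([("base", [])])
    answers: list[str] = []
    ctl.solve(on_model=lambda m: answers.extend(
        str(atom) for atom in m.symbols(shown=True) if query in str(atom)))
    return answers

if __name__ == "__main__":
    print(answer("Alice is Bob's mother. Bob is Carol's mother.",
                 "grandmother"))
    # expected output: ['relation(grandmother,alice,carol)']

The design point illustrated here is the division of labor: only the fact-extraction step is task-specific and learned from a few in-context examples, while the knowledge module is declarative, reusable across tasks, and solved exactly by the ASP engine.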
Pages: 5186-5219
Number of pages: 34