Is neuro-symbolic AI meeting its promises in natural language processing? A structured review

Cited by: 16
Authors
Hamilton, Kyle [1 ]
Nayak, Aparna [1 ]
Bozic, Bojan [1 ]
Longo, Luca [1 ]
Affiliations
[1] Technol Univ Dublin, SFI Ctr Res Training Machine Learning, Sch Comp Sci, Dublin, Ireland
Funding
Science Foundation Ireland;
Keywords
Neuro-symbolic artificial intelligence; natural language processing; deep learning; knowledge representation & reasoning; structured review; KNOWLEDGE; REPRESENTATION; SYSTEMS; NETWORK; RULES;
DOI
10.3233/SW-223228
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Advocates for Neuro-Symbolic Artificial Intelligence (NeSy) assert that combining deep learning with symbolic reasoning will lead to stronger AI than either paradigm on its own. As successful as deep learning has been, it is generally accepted that even our best deep learning systems are not very good at abstract reasoning. And since reasoning is inextricably linked to language, it makes intuitive sense that Natural Language Processing (NLP) would be a particularly well-suited candidate for NeSy. We conduct a structured review of studies implementing NeSy for NLP, with the aim of answering whether NeSy is indeed meeting its promises: reasoning, out-of-distribution generalization, interpretability, learning and reasoning from small data, and transferability to new domains. We examine the impact of knowledge representation (such as rules and semantic networks), language structure, and relational structure, and whether implicit or explicit reasoning contributes to higher promise scores. We find that systems where logic is compiled into the neural network satisfy the most NeSy goals, while other factors, such as knowledge representation or type of neural architecture, do not exhibit a clear correlation with goals being met. We find many discrepancies in how reasoning is defined, specifically in relation to human-level reasoning, which affect decisions about model architectures and drive conclusions that are not always consistent across studies. Hence we advocate for a more methodical approach to the application of theories of human reasoning, as well as the development of appropriate benchmarks, which we hope can lead to a better understanding of progress in the field. We make our data and code available on GitHub for further analysis.
Pages: 1265-1306 (42 pages)