Coalition Situational Understanding via Explainable Neuro-Symbolic Reasoning and Learning

Cited by: 0
Authors
Preece, Alun [1]
Braines, Dave [1,2]
Cerutti, Federico [1,3]
Furby, Jack [1]
Hiley, Liam [1]
Kaplan, Lance [4]
Law, Mark [5]
Russo, Alessandra [5]
Srivastava, Mani [6]
Vilamala, Marc Roig [1]
Xing, Tianwei [6]
Affiliations
[1] Cardiff Univ, Cardiff, Wales
[2] IBM Res Europe, Warrington, Cheshire, England
[3] Univ Brescia, Brescia, Italy
[4] DEVCOM Army Res Lab, Adelphi, MD USA
[5] Imperial Coll London, London, England
[6] Univ Calif Los Angeles, Los Angeles, CA 90024 USA
Source
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS III | 2021 / Volume 11746
Keywords
situational understanding; coalition; artificial intelligence; machine learning; machine reasoning; explainability
DOI
10.1117/12.2587850
CLC Number
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Achieving coalition situational understanding (CSU) involves both insight, i.e., recognising existing situations, and foresight, i.e., learning and reasoning to draw inferences about those situations, exploiting assets from across a coalition, including sensor feeds of various modalities and analytic services. Recent years have seen significant advances in artificial intelligence (AI) and machine learning (ML) technologies applicable to CSU. However, state-of-the-art ML techniques based on deep neural networks require large volumes of training data; unfortunately, representative training examples of situations of interest in CSU are usually sparse. Moreover, to be useful, ML-based analytic services cannot be 'black boxes'; they must be capable of explaining their outputs. In this paper we describe an integrated CSU architecture that combines deep neural networks with symbolic learning and reasoning to address the problem of sparse training data. We also demonstrate how explainability can be achieved for deep neural networks operating on multimodal sensor feeds, and show how the combined neuro-symbolic system achieves a layered approach to explainability. The work focuses on real-time decision-making settings at the tactical edge, with both the symbolic and neural network parts of the system, including the explainability approaches, able to handle temporal features.
Pages: 12
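
As a rough illustration of the layered neuro-symbolic pattern the abstract describes, here is a minimal Python sketch: a neural perception layer assigns probabilities to atomic events in sensor windows, and a hand-written symbolic rule composes them into a temporal complex-event inference. All names, the event labels, and the rule itself are hypothetical illustrations, not drawn from the paper.

```python
# Minimal neuro-symbolic sketch (hypothetical; not the authors' implementation).
import torch
import torch.nn as nn

class AtomicEventNet(nn.Module):
    """Toy perception layer: maps a sensor feature window to atomic-event probabilities."""
    def __init__(self, n_features: int = 16, n_events: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_events),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.softmax(self.net(x), dim=-1)

# Hypothetical atomic-event vocabulary for two audio sensor windows.
EVENTS = ["gunshot", "siren", "crowd"]

def disturbance_prob(p_t0: torch.Tensor, p_t1: torch.Tensor) -> torch.Tensor:
    """Symbolic temporal rule: 'disturbance' holds if a gunshot at time t0 is
    followed by a siren at time t1; assuming independence, multiply the two
    atomic-event probabilities produced by the neural layer."""
    return p_t0[EVENTS.index("gunshot")] * p_t1[EVENTS.index("siren")]

if __name__ == "__main__":
    torch.manual_seed(0)
    model = AtomicEventNet()
    p_t0 = model(torch.randn(16))  # atomic-event distribution at t0
    p_t1 = model(torch.randn(16))  # atomic-event distribution at t1
    print("P(disturbance) =", float(disturbance_prob(p_t0, p_t1)))
```

The layering mirrors the paper's explainability story: the symbolic rule states which atomic events contributed to the complex-event inference, while input-attribution methods such as layer-wise relevance propagation can explain each atomic-event probability in terms of the raw sensor features.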