EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: The MonuMAI cultural heritage use case

Cited by: 58
Authors
Diaz-Rodriguez, Natalia [1,2,4,7]
Lamas, Alberto
Sanchez, Jules [1,2]
Franchi, Gianni [1,2]
Donadello, Ivan [3]
Tabik, Siham [4]
Filliat, David [1]
Cruz, Policarpo [6]
Montes, Rosana [8]
Herrera, Francisco [4,5,7]
Affiliations
[1] Inst Polytech Paris, U2IS, ENSTA, F-91762 Palaiseau, France
[2] Inria Flowers, F-91762 Palaiseau, France
[3] Free Univ Bozen Bolzano, I-39100 Bolzano, Italy
[4] Univ Granada, DaSCI Andalusian Inst Data Sci & Computat Intelligence, Granada 18071, Spain
[5] King Abdulaziz Univ, Fac Comp & Informat Technol, Jeddah 21589, Saudi Arabia
[6] Univ Granada, Dept Art Hist, Granada 18071, Spain
[7] Univ Granada, Dept Comp Sci & Artificial Intelligence, Granada 18071, Spain
[8] Univ Granada, Dept Software Engn, Granada 18071, Spain
Keywords
Explainable artificial intelligence; Deep learning; Neural-symbolic learning; Expert knowledge graphs; Compositionality; Part-based object detection and classification
DOI
10.1016/j.inffus.2021.09.022
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The latest Deep Learning (DL) models for detection and classification have achieved unprecedented performance over classical machine learning algorithms. However, DL models are black-box methods that are hard to debug, interpret, and certify. DL alone cannot provide explanations that can be validated by a non-technical audience such as end-users or domain experts. In contrast, symbolic AI systems that convert concepts into rules or symbols, such as knowledge graphs, are easier to explain. However, they present lower generalization and scaling capabilities. A very important challenge is to fuse DL representations with expert knowledge. One way to address this challenge, as well as the performance-explainability trade-off, is to leverage the best of both streams without obviating domain expert knowledge. In this paper, we tackle this problem by considering symbolic knowledge expressed in the form of a domain expert knowledge graph. We present the eXplainable Neural-symbolic learning (X-NeSyL) methodology, designed to learn both symbolic and deep representations, together with an explainability metric to assess the level of alignment of machine and human expert explanations. The ultimate objective is to fuse DL representations with expert domain knowledge during the learning process so that it serves as a sound basis for explainability. In particular, the X-NeSyL methodology involves two concrete notions of explanation, at inference and training time respectively: (1) EXPLANet: Expert-aligned eXplainable Part-based cLAssifier NETwork Architecture, a compositional convolutional neural network that makes use of symbolic representations, and (2) SHAP-Backprop, an explainable-AI-informed training procedure that corrects and guides the DL process to align with such symbolic representations in the form of knowledge graphs. We showcase the X-NeSyL methodology on the MonuMAI dataset for monument facade image classification and demonstrate that our approach can improve explainability and performance at the same time.
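Since the abstract is the only technical description in this record, a minimal sketch may help make the two notions concrete. The following Python/PyTorch code is an illustration under stated assumptions, not the authors' implementation: EXPLANetSketch, shap_backprop_penalty, and kg_mask are hypothetical names; the real EXPLANet aggregates the output of an object detector (e.g., Faster R-CNN) rather than a single backbone feature vector, and the real SHAP-Backprop obtains attributions from the SHAP library rather than the stand-in tensor used here.

    # Minimal sketch (assumptions flagged above): a part-based classifier
    # whose style decision is a function of detected-part evidence only,
    # plus a SHAP-misattribution penalty against an expert knowledge graph.
    import torch
    import torch.nn as nn

    class EXPLANetSketch(nn.Module):  # hypothetical name
        """Predict part scores first, then classify style from parts only."""
        def __init__(self, backbone_dim, n_parts, n_styles):
            super().__init__()
            self.part_head = nn.Linear(backbone_dim, n_parts)  # part evidence
            self.style_head = nn.Linear(n_parts, n_styles)     # compositional decision

        def forward(self, features):
            parts = torch.sigmoid(self.part_head(features))
            return parts, self.style_head(parts)

    def shap_backprop_penalty(shap_values, kg_mask):
        """Penalize attribution mass on part/style pairs the expert knowledge
        graph rules out (kg_mask entry 0 = part not allowed for that style)."""
        return (shap_values.abs() * (1.0 - kg_mask)).sum()

    # Illustrative sizes: 4 MonuMAI styles; 15 is a nominal part count.
    model = EXPLANetSketch(backbone_dim=512, n_parts=15, n_styles=4)
    feats = torch.randn(8, 512)                     # stand-in backbone features
    parts, styles = model(feats)
    ce = nn.functional.cross_entropy(styles, torch.randint(0, 4, (8,)))
    # In the real procedure the attributions depend on the model, so the
    # penalty steers training; here a constant random tensor stands in.
    shap_vals = torch.randn(8, 4, 15)
    kg_mask = torch.randint(0, 2, (4, 15)).float()  # 1 = part allowed for style
    loss = ce + 0.1 * shap_backprop_penalty(shap_vals, kg_mask)
    loss.backward()

The design point worth noting is the compositional bottleneck: because the style head only sees the part vector, any classification can be read back as a combination of architectural elements, which is what makes alignment with the expert knowledge graph checkable.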
Pages: 58-83
Number of pages: 26