Logic Explained Networks

Cited by: 22
Authors
Ciravegna, Gabriele [1 ,2 ,4 ]
Barbiero, Pietro [3 ]
Giannini, Francesco [2 ]
Gori, Marco [2 ,4 ]
Lio, Pietro [3 ]
Maggini, Marco [2 ]
Melacci, Stefano [2 ]
Affiliations
[1] Univ Florence, Dept Informat Engn, Florence, Italy
[2] Univ Siena, Dept Informat Engn & Math, Siena, Italy
[3] Univ Cambridge, Dept Comp Sci & Technol, Cambridge, England
[4] Univ Cote Azur, Maasai, Inria, CNRS, I3S, Nice, France
Keywords
Explainable AI; Neural networks; Logic Explained Networks; Black-box; Selection; Models
DOI
10.1016/j.artint.2022.103822
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The large and still increasing popularity of deep learning clashes with a major limitation of neural network architectures: their inability to provide human-understandable motivations for their decisions. In situations where the machine is expected to support the decisions of human experts, providing a comprehensible explanation is a feature of crucial importance. The language used to communicate the explanations must be formal enough to be implementable in a machine and friendly enough to be understandable by a wide audience. In this paper, we propose a general approach to Explainable Artificial Intelligence for neural architectures, showing how a mindful design of the networks leads to a family of interpretable deep learning models called Logic Explained Networks (LENs). LENs only require their inputs to be human-understandable predicates, and they provide explanations in terms of simple First-Order Logic (FOL) formulas involving such predicates. LENs are general enough to cover a large number of scenarios. Amongst them, we consider the case in which LENs are directly used as special classifiers with the capability of being explainable, and the case in which they act as additional networks whose role is to create the conditions for making a black-box classifier explainable by FOL formulas. Although supervised learning problems are mostly emphasized, we also show that LENs can learn and provide explanations in unsupervised learning settings. Experimental results on several datasets and tasks show that LENs may yield better classifications than established white-box models, such as decision trees and Bayesian rule lists, while providing more compact and meaningful explanations. (C) 2022 Elsevier B.V. All rights reserved.
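As an illustration of the mechanism sketched in the abstract, the following is a minimal, hypothetical Python sketch (not the authors' implementation): a classifier trained on human-understandable boolean predicates whose positive predictions are read back as a First-Order-Logic-style explanation, i.e., a disjunction of conjunctions over the predicates. The concept names and the toy target are invented for illustration, and a scikit-learn decision tree stands in for the neural model only to keep the sketch small and deterministic.

# Minimal sketch of the LEN idea (not the authors' implementation).
import itertools
import numpy as np
from sklearn.tree import DecisionTreeClassifier

concepts = ["has_wings", "lays_eggs", "has_fur"]              # human-understandable predicates (hypothetical)
X = np.array(list(itertools.product([0, 1], repeat=len(concepts))))  # all truth assignments
y = np.array([int(w and e and not f) for w, e, f in X])       # toy target class

model = DecisionTreeClassifier().fit(X, y)                    # stand-in for the neural classifier

# Example-level explanations: for each sample predicted positive, the
# conjunction of (possibly negated) predicates that hold for it.
minterms = {
    " & ".join(c if v else f"~{c}" for c, v in zip(concepts, x))
    for x, pred in zip(X, model.predict(X)) if pred == 1
}

# Class-level explanation: the disjunction (DNF) of the example-level terms.
print(" | ".join(f"({m})" for m in sorted(minterms)))
# expected output: (has_wings & lays_eggs & ~has_fur)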
Pages: 30