Quod erat demonstrandum? - Towards a typology of the concept of explanation for the design of explainable AI

Cited by: 47
Authors
Cabitza, Federico [1,2]
Campagner, Andrea [1]
Malgieri, Gianclaudio [3,4]
Natali, Chiara [1]
Schneeberger, David [5]
Stoeger, Karl [5]
Holzinger, Andreas [6]
Affiliations
[1] Univ Milano Bicocca, DISCo, viale Sarca 336, I-20126 Milan, Italy
[2] IRCCS Orthoped Inst Galeazzi, via Galeazzi, 4, I-20161 Milan, Italy
[3] EDHEC Business Sch, Augmented Law Inst, 24 Ave Gustave Delory, CS 50411, F-59057 Roubaix 1, France
[4] Leiden Univ, eLaw, Rapenburg 70, NL-2311 EZ Leiden, Netherlands
[5] Univ Vienna, Schottenbastei 10-16, A-1010 Vienna, Austria
[6] Univ Nat Resources & Life Sci Vienna, Peter Jordan Str 82, A-1190 Vienna, Austria
Funding
Austrian Science Fund (FWF);
Keywords
Explainable AI; XAI; Explanations; Taxonomy; Artificial intelligence; Machine learning; AUTOMATED DECISION-MAKING; BLACK-BOX; MACHINE; QUALITY;
DOI
10.1016/j.eswa.2022.118888
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we present a fundamental framework for defining different types of explanations of AI systems and the criteria for evaluating their quality. Starting from a structural view of how explanations can be constructed, i.e., in terms of an explanandum (what needs to be explained), multiple explanantia (explanations, clues, or parts of information that explain), and a relationship linking explanandum and explanantia, we propose an explanandum-based typology and point to other possible typologies based on how explanantia are presented and how they relate to explananda. We also highlight two broad and complementary perspectives for defining possible quality criteria for assessing explainability: epistemological and psychological (cognitive). These definition attempts aim to support the three main functions that we believe should attract the interest and further research of XAI scholars: clear inventories, clear verification criteria, and clear validation methods.
Pages: 16
References
119 references in total
  • [1] Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
    Adadi, Amina
    Berrada, Mohammed
    [J]. IEEE ACCESS, 2018, 6 : 52138 - 52160
  • [2] Adhikari A, 2019, IEEE INT FUZZY SYST
  • [3] Permutation importance: a corrected feature importance measure
    Altmann, Andre
    Tolosi, Laura
    Sander, Oliver
    Lengauer, Thomas
    [J]. BIOINFORMATICS, 2010, 26 (10) : 1340 - 1347
  • [4] Explainable artificial intelligence: an analytical review
    Angelov, Plamen P.
    Soares, Eduardo A.
    Jiang, Richard
    Arnold, Nicholas I.
    Atkinson, Peter M.
    [J]. WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2021, 11 (05)
  • [5] Fair and Adequate Explanations
    Asher, Nicholas
    Paul, Soumya
    Russell, Chris
    [J]. MACHINE LEARNING AND KNOWLEDGE EXTRACTION (CD-MAKE 2021), 2021, 12844 : 79 - 97
  • [6] Baehrens D, 2010, J MACH LEARN RES, V11, P1803
  • [7] Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
    Barredo Arrieta, Alejandro
    Diaz-Rodriguez, Natalia
    Del Ser, Javier
    Bennetot, Adrien
    Tabik, Siham
    Barbado, Alberto
    Garcia, Salvador
    Gil-Lopez, Sergio
    Molina, Daniel
    Benjamins, Richard
    Chatila, Raja
    Herrera, Francisco
    [J]. INFORMATION FUSION, 2020, 58 : 82 - 115
  • [8] PROTOTYPE SELECTION FOR INTERPRETABLE CLASSIFICATION
    Bien, Jacob
    Tibshirani, Robert
    [J]. ANNALS OF APPLIED STATISTICS, 2011, 5 (04) : 2403 - 2424
  • [9] Machine Learning Explainability Through Comprehensible Decision Trees
    Blanco-Justicia, Alberto
    Domingo-Ferrer, Josep
    [J]. MACHINE LEARNING AND KNOWLEDGE EXTRACTION, CD-MAKE 2019, 2019, 11713 : 15 - 26
  • [10] Bordt S., 2022, arXiv