A Practical Tutorial on Explainable AI Techniques

Cited by: 13
Authors
Bennetot, Adrien [1]
Donadello, Ivan [2]
El Qadi El Haouari, Ayoub [1,3]
Dragoni, Mauro [4]
Frossard, Thomas [3]
Wagner, Benedikt [5]
Saranti, Anna [6]
Tulli, Silvia [1]
Trocan, Maria [7]
Chatila, Raja [1]
Holzinger, Andreas [6,8]
d'Avila Garcez, Artur [5]
Diaz-Rodriguez, Natalia [9]
Affiliations
[1] Sorbonne Univ, Paris, Ile de France, France
[2] Free Univ Bozen-Bolzano, Bolzano, Italy
[3] Tinubu Sq, Paris, France
[4] Fdn Bruno Kessler, Trento, Italy
[5] City Univ London, London, England
[6] Univ Nat Resources & Life Sci, Vienna, Austria
[7] Inst Super Elect Paris (ISEP), Paris, France
[8] Med Univ Graz, Inst Med Informat, Graz, Austria
[9] Univ Granada, Granada, Andalucia, Spain
Funding
Austrian Science Fund (FWF);
Keywords
Explainable artificial intelligence; machine learning; deep learning; interpretability; Shapley; Grad-CAM; layer-wise relevance propagation; DiCE; counterfactual explanations; TS4NLE; neural-symbolic learning; CLASSIFICATION; EXPLANATIONS; LANGUAGE;
DOI
10.1145/3670685
Chinese Library Classification
TP301 [Theory and Methods];
Subject Classification Code
081202;
Abstract
The past years have been characterized by an upsurge in opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed explanations for their behavior. As opaque Machine Learning models are increasingly being employed to make important predictions in critical domains, there is a danger of creating and using decisions that are not justifiable or legitimate. Therefore, there is general agreement on the importance of endowing DNNs with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency, and fairness. This guide is intended to be the go-to handbook for anyone with a computer science background aiming to obtain intuitive insights from Machine Learning models, accompanied by out-of-the-box explanations. The article aims to rectify the lack of a practical XAI guide by applying XAI techniques to everyday models, datasets, and use cases. In each chapter, the reader will find a description of the proposed method as well as one or several examples of use with Python notebooks. These can be easily modified to be applied to specific applications. We also explain what the prerequisites are for using each technique, what the user will learn about it, and which tasks it is aimed at.
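
As a flavor of the notebook-style examples the tutorial describes, below is a minimal sketch of an out-of-the-box Shapley-value explanation for an opaque tabular model. The dataset (scikit-learn's diabetes data), the random-forest model, and all parameter choices are illustrative assumptions made here, not the tutorial's own notebook code.

# Minimal sketch (assumed setup): SHAP feature attributions for a tree model.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque model on a standard tabular regression dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])  # array of shape (5, n_features)

# Show each feature's contribution to the first prediction vs. the baseline.
for name, value in zip(X.columns, shap_values[0]):
    print(f"{name}: {value:+.3f}")

Swapping in a different model or dataset only requires changing the training lines; this is exactly the kind of plug-and-play modification the tutorial's notebooks are designed for.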
Pages: 44