A Practical Tutorial on Explainable AI Techniques

Cited by: 13
Authors
Bennetot, Adrien [1]
Donadello, Ivan [2]
El Qadi El Haouari, Ayoub [1,3]
Dragoni, Mauro [4]
Frossard, Thomas [3]
Wagner, Benedikt [5]
Saranti, Anna [6]
Tulli, Silvia [1]
Trocan, Maria [7]
Chatila, Raja [1]
Holzinger, Andreas [6,8]
d'Avila Garcez, Artur [5]
Diaz-Rodriguez, Natalia [9]
Affiliations
[1] Sorbonne Univ, Paris, Ile De France, France
[2] Free Univ Bozen Bolzano, Bolzano, Italy
[3] Tinubu Sq, Paris, France
[4] Fdn Bruno Kessler, Trento, Italy
[5] City Univ London, London, England
[6] Univ Nat Resources & Life Sci, Vienna, Austria
[7] Inst Super Elect Paris ISEP, Paris, France
[8] Med Univ Graz, Inst Med Informat, Graz, Austria
[9] Univ Granada, Granada, Andalucia, Spain
Funding
Austrian Science Fund (FWF);
Keywords
Explainable artificial intelligence; machine learning; deep learning; interpretability; Shapley; Grad-CAM; layer-wise relevance propagation; DiCE; counterfactual explanations; TS4NLE; neural-symbolic learning; CLASSIFICATION; EXPLANATIONS; LANGUAGE;
DOI
10.1145/3670685
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Recent years have seen an upsurge in opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although DNNs have great generalization and prediction abilities, it is difficult to obtain detailed explanations for their behavior. As opaque Machine Learning models are increasingly employed to make important predictions in critical domains, there is a danger of creating and using decisions that are not justifiable or legitimate. There is therefore general agreement on the importance of endowing DNNs with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enhance them with desirable notions such as trustworthiness, accountability, transparency, and fairness. This guide is intended to be the go-to handbook for anyone with a computer science background who wants to obtain intuitive, out-of-the-box explanations from Machine Learning models. The article aims to rectify the lack of a practical XAI guide by applying XAI techniques to, in particular, day-to-day models, datasets, and use-cases. Each chapter gives a description of the proposed method together with one or several examples of use in Python notebooks, which can easily be adapted to specific applications. We also explain the prerequisites for using each technique, what the reader will learn from it, and which tasks it is aimed at.
Pages: 44
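As a taste of the hands-on, notebook-style usage the abstract describes, below is a minimal sketch applying the SHAP library (one of the techniques named in the keywords) to an off-the-shelf scikit-learn model. The dataset, model, and plotting call are illustrative assumptions, not taken from the article's own notebooks.

# Minimal SHAP sketch; assumes shap and scikit-learn are installed.
# Dataset and model choices are illustrative only.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Fit an opaque model on a standard tabular dataset.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# shap.Explainer auto-dispatches to a fast tree explainer for tree ensembles.
explainer = shap.Explainer(model)
explanation = explainer(X_test.iloc[:25])

# explanation.values has shape (n_samples, n_features): one Shapley value per
# feature per prediction; each row sums to the prediction minus the base value.
shap.plots.beeswarm(explanation)

The same fit-then-explain pattern carries over, with different libraries and model types, to the other techniques listed in the keywords, such as Grad-CAM, layer-wise relevance propagation, and DiCE counterfactuals.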