A historical perspective of explainable Artificial Intelligence

Cited by: 163
Authors
Confalonieri, Roberto [1]
Coba, Ludovik [1]
Wagner, Benedikt [2]
Besold, Tarek R. [3]
Affiliations
[1] Free Univ Bozen Bolzano, Fac Comp Sci, Dominikanerpl 3, I-39100 Bozen Bolzano, Italy
[2] City Univ London, Res Ctr Machine Learning, London, England
[3] Neurocat GmbH, Berlin, Germany
Keywords
explainable AI; explainable recommender systems; interpretable machine learning; neural-symbolic reasoning; EXPLANATIONS; TAXONOMY; RULES; ONTOLOGIES; QUALITY; OBJECTS; MODELS
DOI
10.1002/widm.1391
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Explainability in Artificial Intelligence (AI) has been revived as a topic of active research by the need to convey safety and trust to users regarding the "how" and "why" of automated decision-making in applications such as autonomous driving, medical diagnosis, and banking and finance. While explainability in AI has recently received significant attention, the origins of this line of work go back several decades, to when AI systems were mainly developed as (knowledge-based) expert systems. Since then, the definition, understanding, and implementation of explainability have been taken up in several lines of research, namely, expert systems, machine learning, recommender systems, and approaches to neural-symbolic learning and reasoning, mostly during different periods of AI history. In this article, we present a historical perspective on Explainable Artificial Intelligence. We discuss how explainability was mainly conceived in the past, how it is understood in the present, and how it might be understood in the future. We conclude the article by proposing criteria for explanations that we believe will play a crucial role in the development of human-understandable explainable systems. This article is categorized under: Fundamental Concepts of Data and Knowledge > Explainable AI; Technologies > Artificial Intelligence
Pages: 21