The quest of parsimonious XAI: A human-agent architecture for explanation formulation

Cited by: 21
Authors
Mualla, Yazan [1]
Tchappi, Igor [1,2,3]
Kampik, Timotheus [4]
Najjar, Amro [5]
Calvaresi, Davide [6]
Abbas-Turki, Abdeljalil [1]
Galland, Stephane [1]
Nicolle, Christophe [7]
Affiliations
[1] Univ Bourgogne Franche Comte, CIAD UMR 7533, UTBM, F-90010 Belfort, France
[2] Orange Lab, 6 Ave Albert Durand, F-31700 Blagnac, France
[3] Univ Ngaoundere, Fac Sci, Ngaoundere 454, Cameroon
[4] Umea Univ, Dept Comp Sci, S-90187 Umea, Sweden
[5] Univ Luxembourg, AIRobolab ICR, Comp Sci & Communicat, L-4365 Luxembourg, Luxembourg
[6] Univ Appl Sci & Arts Western Switzerland, Sierre, Switzerland
[7] Univ Bourgogne Franche Comte, CIAD UMR 7533, F-21000 Dijon, France
Funding
Swiss National Science Foundation;
Keywords
Explainable artificial intelligence; Human-computer interaction; Multi-agent systems; Empirical user studies; Statistical testing; SIMULATION;
DOI
10.1016/j.artint.2021.103573
CLC number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104; 0812; 0835; 1405;
Abstract
With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guaranteeing successful human-agent collaboration, since it is not straightforward for humans to understand an agent's state of mind. Recent empirical studies have confirmed that explaining a system's behavior to human users fosters their acceptance of the system. However, providing overwhelming or unnecessary information may also confuse users and cause failure. For these reasons, parsimony has been identified as one of the key features enabling successful human-agent interaction, a parsimonious explanation being defined as the simplest explanation (i.e., least complex) that describes the situation adequately (i.e., descriptive adequacy). While parsimony is receiving growing attention in the literature, most of the work is carried out on the conceptual front. This paper proposes a mechanism for parsimonious eXplainable AI (XAI). In particular, it introduces the process of explanation formulation and proposes HAExA, a human-agent explainability architecture that makes this process operational for remote robots. To provide parsimonious explanations, HAExA relies on both contrastive explanations and explanation filtering. To evaluate the proposed architecture, several research hypotheses are investigated in an empirical user study that relies on well-established XAI metrics to estimate how trustworthy and satisfactory the explanations provided by HAExA are. The results are analyzed using parametric and non-parametric statistical testing. (C) 2021 Elsevier B.V. All rights reserved.
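The abstract only sketches how HAExA combines contrastive explanations with explanation filtering to reach parsimony. As a rough illustration of that idea (not the paper's actual implementation), the Python sketch below selects, among candidate contrastive explanations, the least complex one that still describes the situation adequately; every class, function, and measure here is a hypothetical stand-in introduced for this example.

```python
# Illustrative sketch only: a minimal model of parsimonious explanation
# selection under assumed proxies for complexity (number of supporting
# statements) and descriptive adequacy (coverage of required facts).
# None of these names come from the HAExA paper.
from dataclasses import dataclass
from typing import List, Optional, Set, Tuple


@dataclass(frozen=True)
class Explanation:
    fact: str                  # what the agent did ("why P ...")
    foil: str                  # the contrast case ("... rather than Q")
    details: Tuple[str, ...]   # supporting statements backing the explanation

    def complexity(self) -> int:
        # Assumed proxy for "least complex": fewer supporting statements.
        return len(self.details)

    def is_adequate(self, required: Set[str]) -> bool:
        # Assumed proxy for "descriptive adequacy": the explanation covers
        # every fact the user needs in order to understand the situation.
        return required.issubset(self.details)


def select_parsimonious(candidates: List[Explanation],
                        required: Set[str]) -> Optional[Explanation]:
    """Filter out inadequate explanations, then keep the simplest survivor."""
    adequate = [e for e in candidates if e.is_adequate(required)]
    return min(adequate, key=lambda e: e.complexity(), default=None)


if __name__ == "__main__":
    # A remote drone justifies returning to base instead of finishing a survey.
    required = {"battery level below safety threshold"}
    candidates = [
        Explanation("return to base", "continue the survey",
                    ("battery level below safety threshold",
                     "wind speed rising", "next waypoint far away")),
        Explanation("return to base", "continue the survey",
                    ("battery level below safety threshold",)),
    ]
    best = select_parsimonious(candidates, required)
    # Both candidates are adequate; the second wins because it is simpler.
    print(best)
```

The fact/foil pair mirrors the "why P rather than Q" structure of contrastive explanations; what counts as required information and how filtering is driven by the human-agent interaction are exactly the parts the paper's architecture addresses and this sketch abstracts away.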
Pages: 26