The quest of parsimonious XAI: A human-agent architecture for explanation formulation

Cited by: 21
Authors
Mualla, Yazan [1 ]
Tchappi, Igor [1 ,2 ,3 ]
Kampik, Timotheus [4 ]
Najjar, Amro [5 ]
Calvaresi, Davide [6 ]
Abbas-Turki, Abdeljalil [1 ]
Galland, Stephane [1 ]
Nicolle, Christophe [7 ]
Affiliations
[1] Univ Bourgogne Franche Comte, CIAD UMR 7533, UTBM, F-90010 Belfort, France
[2] Orange Lab, 6 Ave Albert Durand, F-31700 Blagnac, France
[3] Univ Ngaoundere, Fac Sci, Ngaoundere 454, Cameroon
[4] Umea Univ, Dept Comp Sci, S-90187 Umea, Sweden
[5] Univ Luxembourg, AIRobolab ICR, Comp Sci & Communicat, L-4365 Luxembourg, Luxembourg
[6] Univ Appl Sci & Arts Western Switzerland, Sierre, Switzerland
[7] Univ Bourgogne Franche Comte, CIAD UMR 7533, F-21000 Dijon, France
Funding
Swiss National Science Foundation;
Keywords
Explainable artificial intelligence; Human-computer interaction; Multi-agent systems; Empirical user studies; Statistical testing; Simulation;
DOI
10.1016/j.artint.2021.103573
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
With the widespread use of Artificial Intelligence (AI), understanding the behavior of intelligent agents and robots is crucial to guarantee successful human-agent collaboration, since it is not straightforward for humans to understand an agent's state of mind. Recent empirical studies have confirmed that explaining a system's behavior to human users fosters the users' acceptance of the system. However, providing overwhelming or unnecessary information may also confuse users and cause failure. For these reasons, parsimony has been outlined as one of the key features enabling successful human-agent interaction, with a parsimonious explanation defined as the simplest (i.e., least complex) explanation that describes the situation adequately (i.e., with descriptive adequacy). While parsimony is receiving growing attention in the literature, most of the work remains conceptual. This paper proposes a mechanism for parsimonious eXplainable AI (XAI). In particular, it introduces the process of explanation formulation and proposes HAExA, a human-agent explainability architecture that makes this process operational for remote robots. To provide parsimonious explanations, HAExA relies on both contrastive explanations and explanation filtering. To evaluate the proposed architecture, several research hypotheses are investigated in an empirical user study that relies on well-established XAI metrics to estimate how trustworthy and satisfactory the explanations provided by HAExA are. The results are analyzed using parametric and non-parametric statistical testing. (C) 2021 Elsevier B.V. All rights reserved.
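The record contains no code, but the abstract's core mechanism, selecting the simplest explanation that is still adequately descriptive from a set of contrastive candidates, can be illustrated with a short sketch. This is a minimal Python illustration under assumptions of ours; the names (Explanation, filter_parsimonious, adequacy_threshold) are hypothetical and not taken from the paper's HAExA implementation.

```python
from dataclasses import dataclass

@dataclass
class Explanation:
    text: str        # contrastive explanation ("X rather than Y because ...")
    complexity: int  # e.g., number of causes or clauses mentioned
    adequacy: float  # descriptive adequacy score in [0, 1]

def filter_parsimonious(candidates, adequacy_threshold=0.8):
    """Pick the least complex explanation that is still adequately
    descriptive; fall back to the most adequate one if none qualifies."""
    adequate = [e for e in candidates if e.adequacy >= adequacy_threshold]
    if not adequate:
        return max(candidates, key=lambda e: e.adequacy)
    return min(adequate, key=lambda e: e.complexity)

if __name__ == "__main__":
    candidates = [
        Explanation("I rerouted because my battery is low.",
                    complexity=1, adequacy=0.85),
        Explanation("I rerouted rather than landing because my battery is "
                    "low and the landing zone is blocked.",
                    complexity=2, adequacy=0.95),
    ]
    print(filter_parsimonious(candidates).text)  # prints the simpler one
```

The abstract also mentions analyzing the user-study results with parametric and non-parametric statistical testing. A common pairing for two independent groups is Welch's t-test plus a Mann-Whitney U test, sketched below with invented placeholder ratings (not the study's data):

```python
from scipy import stats

# hypothetical per-participant trust ratings for two explanation conditions
haexa = [4.2, 4.5, 3.9, 4.8, 4.1, 4.6]
baseline = [3.1, 3.8, 3.4, 3.0, 3.6, 3.3]

# parametric: Welch's t-test (does not assume equal variances)
t_stat, t_p = stats.ttest_ind(haexa, baseline, equal_var=False)

# non-parametric: Mann-Whitney U, robust to non-normal rating distributions
u_stat, u_p = stats.mannwhitneyu(haexa, baseline, alternative="two-sided")

print(f"Welch t-test: t={t_stat:.2f}, p={t_p:.4f}")
print(f"Mann-Whitney U: U={u_stat:.1f}, p={u_p:.4f}")
```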
Pages: 26