What do we want from Explainable Artificial Intelligence (XAI)? - A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research

Cited: 299
Authors
Langer, Markus [1]
Oster, Daniel [2]
Speith, Timo [2,3]
Hermanns, Holger [3,4]
Kaestner, Lena [2]
Schmidt, Eva [5]
Sesing, Andreas [6]
Baum, Kevin [2,3]
Affiliations
[1] Saarland Univ, Dept Psychol, Saarbrucken, Germany
[2] Saarland Univ, Inst Philosophy, Saarbrucken, Germany
[3] Saarland Univ, Dept Comp Sci, Saarbrucken, Germany
[4] Inst Intelligent Software, Guangzhou, Peoples R China
[5] Tech Univ Dortmund, Inst Philosophy & Polit Sci, Dortmund, Germany
[6] Saarland Univ, Inst Legal Informat, Saarbrucken, Germany
Keywords
Explainable Artificial Intelligence; Explainability; Interpretability; Explanations; Understanding; Interdisciplinary Research; Human-Computer Interaction; BLACK-BOX; THEORETICAL FOUNDATIONS; DECISION-MAKING; EXPLANATIONS; AUTOMATION; KNOWLEDGE; FRAMEWORK; COGNITION; SYSTEM; TRUST
DOI
10.1016/j.artint.2021.103473
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these "stakeholders' desiderata") in a variety of contexts. However, the literature on XAI is vast and spread across multiple, largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations that must be considered and investigated when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve researchers from the variety of disciplines involved in XAI as a common ground. It emphasizes where there is interdisciplinary potential in the evaluation and the development of explainability approaches. (C) 2021 Elsevier B.V. All rights reserved.
Pages: 24