What do we want from Explainable Artificial Intelligence (XAI)? - A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research

Cited by: 255
Authors
Langer, Markus [1 ]
Oster, Daniel [2 ]
Speith, Timo [2 ,3 ]
Hermanns, Holger [3 ,4 ]
Kaestner, Lena [2 ]
Schmidt, Eva [5 ]
Sesing, Andreas [6 ]
Baum, Kevin [2 ,3 ]
Affiliations
[1] Saarland Univ, Dept Psychol, Saarbrucken, Germany
[2] Saarland Univ, Inst Philosophy, Saarbrucken, Germany
[3] Saarland Univ, Dept Comp Sci, Saarbrucken, Germany
[4] Inst Intelligent Software, Guangzhou, Peoples R China
[5] Tech Univ Dortmund, Inst Philosophy & Polit Sci, Dortmund, Germany
[6] Saarland Univ, Inst Legal Informat, Saarbrucken, Germany
Keywords
Explainable Artificial Intelligence; Explainability; Interpretability; Explanations; Understanding; Interdisciplinary Research; Human-Computer Interaction; BLACK-BOX; THEORETICAL FOUNDATIONS; DECISION-MAKING; EXPLANATIONS; AUTOMATION; KNOWLEDGE; FRAMEWORK; COGNITION; SYSTEM; TRUST;
DOI
10.1016/j.artint.2021.103473
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these "stakeholders' desiderata") in a variety of contexts. However, the literature on XAI is vast and spread across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability of artificial systems and reviews their desiderata. We provide a model that explicitly spells out the main concepts and relations necessary to consider and investigate when evaluating, adjusting, choosing, and developing explainability approaches that aim to satisfy stakeholders' desiderata. This model can serve researchers from the variety of different disciplines involved in XAI as a common ground. It emphasizes where there is interdisciplinary potential in the evaluation and the development of explainability approaches. (C) 2021 Elsevier B.V. All rights reserved.
Pages: 24