Exploring Explainability: A Definition, a Model, and a Knowledge Catalogue

Cited by: 64
Authors
Chazette, Larissa [1]
Brunotte, Wasja [1,2]
Speith, Timo [3,4]
Affiliations
[1] Leibniz Univ Hannover, Software Engn Grp, Hannover, Germany
[2] Leibniz Univ Hannover, Cluster Excellence PhoenixD, Hannover, Germany
[3] Saarland Univ, Inst Philosophy, Saarbrucken, Germany
[4] Saarland Univ, Dept Comp Sci, Saarbrucken, Germany
Source
29th IEEE International Requirements Engineering Conference (RE 2021) | 2021
Keywords
Explainability; Explanations; Explainable Artificial Intelligence; Interpretability; Non-Functional Requirements; Quality Aspects; Requirements Synergy; Software Transparency; Artificial Intelligence
DOI
10.1109/RE51729.2021.00025
Chinese Library Classification (CLC)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
The growing complexity of software systems and the influence of software-supported decisions on our society have created a need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities. It is recognized as an emerging non-functional requirement (NFR) with a significant impact on system quality. However, to incorporate this NFR into systems, we need to understand what explainability means from a software engineering perspective and how it affects other quality aspects of a system. Such an understanding allows for an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. Nevertheless, explainability is currently under-researched in requirements engineering, and there is a lack of conceptual models and knowledge catalogues to support the requirements engineering process and system design. In this work, we bridge this gap by proposing a definition, a model, and a catalogue for explainability. They illustrate how explainability interacts with other quality aspects and how it may impact various quality dimensions of a system. To this end, we conducted an interdisciplinary Systematic Literature Review and validated our findings with experts in workshops.
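As a rough illustration only (not part of the paper): the general idea of a knowledge catalogue that records how explainability interacts with other quality aspects could be sketched as a small data structure for use during requirements analysis. The names below (ExplainabilityCatalogue, CatalogueEntry, Influence) and the example entries are hypothetical; this is a minimal Python sketch of one possible representation, not the catalogue the authors propose.

```python
# Hypothetical sketch: a tiny catalogue recording how explainability may
# influence other quality aspects, to surface trade-offs early in RE.
from dataclasses import dataclass, field
from enum import Enum


class Influence(Enum):
    POSITIVE = "+"      # explainability tends to support this quality aspect
    NEGATIVE = "-"      # explainability tends to conflict with this aspect
    AMBIVALENT = "+/-"  # the effect depends on the design context


@dataclass
class CatalogueEntry:
    quality_aspect: str
    influence: Influence
    rationale: str = ""


@dataclass
class ExplainabilityCatalogue:
    entries: list[CatalogueEntry] = field(default_factory=list)

    def add(self, aspect: str, influence: Influence, rationale: str = "") -> None:
        self.entries.append(CatalogueEntry(aspect, influence, rationale))

    def conflicts(self) -> list[CatalogueEntry]:
        # Entries where adding explanations may hurt another quality aspect,
        # flagging potential design trade-offs for early analysis.
        return [e for e in self.entries if e.influence is Influence.NEGATIVE]


if __name__ == "__main__":
    catalogue = ExplainabilityCatalogue()
    catalogue.add("Transparency", Influence.POSITIVE,
                  "Explanations expose system reasoning to stakeholders.")
    catalogue.add("Performance", Influence.NEGATIVE,
                  "Generating explanations can add runtime overhead.")
    for entry in catalogue.conflicts():
        print(f"Potential trade-off: {entry.quality_aspect} ({entry.rationale})")
```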
Pages: 197-208
Page count: 12