Explainable software systems: from requirements analysis to system evaluation

Cited by: 11
Authors
Chazette, Larissa [1]
Brunotte, Wasja [1,2]
Speith, Timo [3,4]
Affiliations
[1] Leibniz Univ Hannover, Software Engn Grp, Hannover, Germany
[2] Leibniz Univ Hannover, Cluster Excellence PhoenixD, Hannover, Germany
[3] Univ Bayreuth, Chair Philosophy Comp Sci & Artificial Intelligence, Bayreuth, Germany
[4] Saarland Univ, Ctr Perspicuous Comp, Saarbrücken, Germany
Keywords
Explainability; Explainable artificial intelligence; Non-functional requirements; Quality aspects; Conceptual model; Reference model; Knowledge catalogue; Artificial intelligence; Explanations; Taxonomy; Model
DOI
10.1007/s00766-022-00393-5
CLC number
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
The growing complexity of software systems and the influence of software-supported decisions on our society have sparked the need for software that is transparent, accountable, and trustworthy. Explainability has been identified as a means to achieve these qualities and is recognized as an emerging non-functional requirement (NFR) with a significant impact on system quality. Accordingly, software engineers need means that assist them in incorporating this NFR into systems, which requires an early analysis of the benefits and possible design issues that arise from interrelationships between different quality aspects. However, explainability is currently under-researched in the domain of requirements engineering, and there is a lack of artifacts that support the requirements engineering process and system design. In this work, we remedy this deficit by proposing four artifacts: a definition of explainability, a conceptual model, a knowledge catalogue, and a reference model for explainable systems. These artifacts should support software and requirements engineers in understanding what explainability means and how it interacts with other quality aspects. Beyond that, they may serve as a starting point for refining explainability from high-level requirements into concrete design choices, and for identifying methods and metrics to evaluate the implemented requirements.
Pages: 457-487
Page count: 31