Revisiting the Performance-Explainability Trade-Off in Explainable Artificial Intelligence (XAI)

Cited by: 12
Authors
Crook, Barnaby [1]
Schlueter, Maximilian [2]
Speith, Timo [1,3]
Affiliations
[1] Univ Bayreuth, Dept Philosophy, Bayreuth, Germany
[2] Tech Univ Dortmund, Programming Syst, Dortmund, Germany
[3] Saarland Univ, Ctr Perspicuous Comp, Saarbrucken, Germany
Source
2023 IEEE 31ST INTERNATIONAL REQUIREMENTS ENGINEERING CONFERENCE WORKSHOPS (REW) | 2023
Keywords
Artificial Intelligence; AI; Explainability; Explainable Artificial Intelligence; Performance; Non-Functional Requirements; NFR; XAI; Trade-Off Analysis; Accuracy; DEEP NEURAL-NETWORKS; RECOGNITION; DECISIONS; GAME; GO;
DOI
10.1109/REW57809.2023.00060
Chinese Library Classification (CLC)
TP31 [Computer Software];
Discipline Code
081202; 0835;
Abstract
Within the field of Requirements Engineering (RE), the increasing significance of Explainable Artificial Intelligence (XAI) in aligning AI-supported systems with user needs, societal expectations, and regulatory standards has garnered recognition. In general, explainability has emerged as an important non-functional requirement that impacts system quality. However, the supposed trade-off between explainability and performance challenges the presumed positive influence of explainability. If meeting the requirement of explainability entails a reduction in system performance, then careful consideration must be given to which of these quality aspects takes precedence and how to compromise between them. In this paper, we critically examine the alleged trade-off. We argue that it is best approached in a nuanced way that incorporates resource availability, domain characteristics, and considerations of risk. By providing a foundation for future research and best practices, this work aims to advance the field of RE for AI.
Pages: 316-324
Number of pages: 9