Artificial intelligence explainability: the technical and ethical dimensions

Cited by: 45
Authors
McDermid, John A. [1 ]
Jia, Yan [1 ]
Porter, Zoe [1 ]
Habli, Ibrahim [1 ]
Affiliations
[1] Univ York, Dept Comp Sci, Deramore Lane, York YO10 5GH, N Yorkshire, England
Source
PHILOSOPHICAL TRANSACTIONS OF THE ROYAL SOCIETY A-MATHEMATICAL PHYSICAL AND ENGINEERING SCIENCES | 2021, Vol. 379, No. 2207
Funding
UK Engineering and Physical Sciences Research Council;
Keywords
explainability; machine learning; assurance; neural networks; classification; explanations; prediction; models;
DOI
10.1098/rsta.2020.0363
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [Natural Sciences, General];
Discipline Codes
07; 0710; 09;
Abstract
In recent years, several new technical methods have been developed to make AI models more transparent and interpretable. These techniques are often referred to collectively as 'AI explainability' or 'XAI' methods. This paper presents an overview of XAI methods and links them to stakeholder purposes for seeking an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that the use of XAI methods must be linked to explanations of the human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or to require. This article is part of the theme issue 'Towards symbiotic autonomous systems'.
Pages: 18