Explanatory pragmatism: a context-sensitive framework for explainable medical AI

Cited by: 21
Authors
Nyrup, Rune [1 ,2 ]
Robinson, Diana [1 ,3 ,4 ]
Affiliations
[1] Univ Cambridge, Leverhulme Ctr Future Intelligence, Cambridge, England
[2] Univ Cambridge, Dept Hist & Philosophy Sci, Cambridge, England
[3] Univ Cambridge, Dept Comp Sci, Cambridge, England
[4] Microsoft Res, Cambridge, England
Funding
Wellcome Trust, UK;
Keywords
Explainable artificial intelligence; XAI; Medical artificial intelligence; Explanation; Understanding; Ethics of artificial intelligence; DEFAULT MODE NETWORK; ARTIFICIAL-INTELLIGENCE; CONNECTIVITY; PHILOSOPHY; EXPERIENCE;
DOI
10.1007/s10676-022-09632-3
Chinese Library Classification
B82 [Ethics (Moral Philosophy)];
Discipline Classification Code
Abstract
Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we seek to address in this paper. We outline a framework, called Explanatory Pragmatism, which we argue has three attractive features. First, it allows us to conceptualise explainability in explicitly context-, audience- and purpose-relative terms, while retaining a unified underlying definition of explainability. Second, it makes visible any normative disagreements, concerning the purposes for which explanations are sought, that may underpin conflicting claims about explainability. Third, it allows us to distinguish several dimensions of AI explainability. We illustrate this framework by applying it to a case study involving a machine learning model for predicting whether patients suffering disorders of consciousness were likely to recover consciousness.
Pages: 15