Faithful and Customizable Explanations of Black Box Models

Cited by: 152
Authors
Lakkaraju, Himabindu [1 ]
Kamar, Ece [2 ]
Caruana, Rich [2 ]
Leskovec, Jure [3 ]
Affiliations
[1] Harvard Univ, Cambridge, MA 02138 USA
[2] Microsoft Res, Redmond, WA USA
[3] Stanford Univ, Stanford, CA 94305 USA
Source
AIES '19: PROCEEDINGS OF THE 2019 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY | 2019
Keywords
Interpretable machine learning; Decision making; Black box models;
DOI
10.1145/3306618.3314229
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As predictive models increasingly assist human experts (e.g., doctors) in day-to-day decision making, it is crucial for experts to be able to explore and understand how such models behave in different feature subspaces in order to know if and when to trust them. To this end, we propose Model Understanding through Subspace Explanations (MUSE), a novel model-agnostic framework which facilitates understanding of a given black box model by explaining how it behaves in subspaces characterized by certain features of interest. Our framework provides end users (e.g., doctors) with the flexibility to customize the model explanations by allowing them to input the features of interest. The construction of explanations is guided by a novel objective function that we propose to simultaneously optimize for fidelity to the original model, unambiguity, and interpretability of the explanation. More specifically, our objective allows us to learn, with optimality guarantees, a small number of compact decision sets, each of which captures the behavior of a given black box model in unambiguous, well-defined regions of the feature space. Experimental evaluation with real-world datasets and user studies demonstrates that our approach can generate customizable, highly compact, easy-to-understand, yet accurate explanations of various kinds of predictive models compared to state-of-the-art baselines.
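To make the abstract's idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm or objective) of subspace-based explanation: partition the data on a user-chosen feature of interest, summarize the black box in each region with a simple majority label, and report fidelity, i.e., the fraction of black-box predictions the summary reproduces. The toy black box, feature names, and threshold are all illustrative assumptions.

```python
# Illustrative sketch of explaining a black box per feature subspace.
# The black box, features, and threshold below are made up for this example.
from collections import Counter

def black_box(x):
    # Toy "black box": predicts 1 when age >= 50 or bmi >= 30.
    return int(x["age"] >= 50 or x["bmi"] >= 30)

def explain_by_subspace(data, feature, threshold, predict):
    """Split data on a feature of interest and summarize the model per region."""
    regions = {"low": [], "high": []}
    for x in data:
        key = "high" if x[feature] >= threshold else "low"
        regions[key].append(predict(x))
    summary = {}
    for key, preds in regions.items():
        if not preds:
            continue
        label, count = Counter(preds).most_common(1)[0]
        # Fidelity: fraction of black-box predictions the majority label matches.
        summary[key] = {"label": label, "fidelity": count / len(preds)}
    return summary

data = [{"age": a, "bmi": b} for a in (30, 45, 55, 65) for b in (22, 28, 33)]
print(explain_by_subspace(data, "age", 50, black_box))
```

A MUSE-style explanation would additionally search over compact rule sets with optimality guarantees; this sketch only shows the fidelity-per-subspace idea that such an objective trades off against explanation size.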
Pages: 131-138
Number of pages: 8