G-LIME: Statistical learning for local interpretations of deep neural networks using global priors

Cited by: 27
Authors
Li, Xuhong [1 ]
Xiong, Haoyi [1 ]
Li, Xingjian [1 ]
Zhang, Xiao [2 ]
Liu, Ji [1 ]
Jiang, Haiyan [1 ]
Chen, Zeyu [1 ]
Dou, Dejing [1 ]
Affiliations
[1] Baidu Inc., Beijing, People's Republic of China
[2] Tsinghua University, Beijing, People's Republic of China
Funding
National Key Research and Development Program of China;
Keywords
Explainable AI (XAI); Interpretable deep learning; SELECTION; REGRESSION;
DOI
10.1016/j.artint.2022.103823
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
To explain the prediction result of a Deep Neural Network (DNN) model on a given sample, LIME [1] and its derivatives have been proposed to approximate the local behavior of the DNN model around the data point via linear surrogates. Though these algorithms interpret the DNN by finding the key features used for classification, the random interpolations used by LIME perturb the explanation results and cause instability and inconsistency between repetitions of the LIME computation. To tackle this issue, we propose G-LIME, which extends the vanilla LIME through high-dimensional Bayesian linear regression with sparsity-inducing and informative global priors. Specifically, given a dataset representing the population of samples (e.g., the training set), G-LIME first pursues a global explanation of the DNN model using the whole dataset. Then, for a new data point, G-LIME incorporates a modified ElasticNet-like estimator to refine the local explanation by balancing the distance to the global explanation against the sparsity (feature selection) of the local explanation. Finally, G-LIME uses Least Angle Regression (LARS) to retrieve the solution path of the modified ElasticNet under varying L1-regularization, screening and ranking the importance of features [2] as the explanation result. Through extensive experiments on real-world tasks, we show that the proposed method yields more stable, consistent, and accurate results compared to LIME. © 2022 Elsevier B.V. All rights reserved.
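The three steps sketched in the abstract (a global explanation fitted on the whole dataset, a local refinement that trades off closeness to the global explanation against sparsity, and a LARS solution path used to rank features) can be illustrated with a short scikit-learn sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy data, the choice of ElasticNet for the global fit, the penalty weight lam2, and the data-augmentation trick used to fold the L2 prior term into a plain Lasso are all illustrative.

```python
# Minimal sketch of a global-prior-guided local surrogate, in the spirit of the
# abstract above. All variable names and hyperparameters are illustrative.
import numpy as np
from sklearn.linear_model import ElasticNet, lars_path

rng = np.random.default_rng(0)

# Toy stand-ins: X holds interpretable (binary) features of perturbed samples,
# y holds the DNN's outputs on those perturbations.
n, d = 200, 10
X = rng.binomial(1, 0.5, size=(n, d)).astype(float)
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Step 1 (global prior): coefficients fitted on data representing the population.
beta_global = ElasticNet(alpha=0.1, l1_ratio=0.5).fit(X, y).coef_

# Step 2 (local refinement): solve
#   min_beta ||y - X beta||^2 + lam2 * ||beta - beta_global||^2 + lam1 * ||beta||_1
# by rewriting the L2 prior term as extra "observations", which turns the
# problem into a plain Lasso on augmented data.
lam2 = 1.0
X_aug = np.vstack([X, np.sqrt(lam2) * np.eye(d)])
y_aug = np.concatenate([y, np.sqrt(lam2) * beta_global])

# Step 3 (LARS solution path): trace the Lasso path over varying L1 strength
# and rank features by the order in which they enter the active set.
alphas, active, coefs = lars_path(X_aug, y_aug, method="lasso")
entry_order = [int(j) for j in active]
print("feature importance ranking (first to enter):", entry_order)
```

Ranking features by the order in which they enter the Lasso/LARS path mirrors the screening-and-ranking role that the abstract assigns to the solution path; the augmentation step is one standard way to impose an L2 penalty toward a non-zero prior while keeping a LARS-compatible Lasso objective.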
Pages: 19
References
75 entries in total
[61] Tibshirani Ryan, 2013, DATA MINING, V36, P462
[62] van der Linden Ilse, 2019, WORKSHOP FAIRNESS AC
[63] van der Waa Jasper; Nieuwburg Elisabeth; Cremers Anita; Neerincx Mark. Evaluating XAI: A comparison of rule-based and example-based explanations [J]. ARTIFICIAL INTELLIGENCE, 2021, 291.
[64] Vedaldi A, 2008, LECT NOTES COMPUT SC, V5305, P705, DOI 10.1007/978-3-540-88693-8_52
[65] Victor Paolo, 2019, PHILIPP STAT, V68, P41
[66] Visani G, 2022, Arxiv, DOI arXiv:2006.05714
[67] Welch WJ. Algorithmic complexity - 3 NP-hard problems in computational statistics [J]. JOURNAL OF STATISTICAL COMPUTATION AND SIMULATION, 1982, 15 (01): 17-25.
[68] Welinder P, 2010, CNSTR2010001 CALTECH, P200, DOI 10.1109/ICCV.2017.309
[69] Witten Daniela M.; Tibshirani Robert. Covariance-regularized regression and classification for high dimensional problems [J]. JOURNAL OF THE ROYAL STATISTICAL SOCIETY SERIES B-STATISTICAL METHODOLOGY, 2009, 71: 615-636.
[70] Yang Mengjiao, 2019, arXiv