G-LIME: Statistical learning for local interpretations of deep neural networks using global priors

Cited by: 27
Authors
Li, Xuhong [1 ]
Xiong, Haoyi [1 ]
Li, Xingjian [1 ]
Zhang, Xiao [2 ]
Liu, Ji [1 ]
Jiang, Haiyan [1 ]
Chen, Zeyu [1 ]
Dou, Dejing [1 ]
Affiliations
[1] Baidu Inc, Beijing, Peoples R China
[2] Tsinghua Univ, Beijing, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Explainable AI (XAI); Interpretable deep learning; Selection; Regression;
DOI
10.1016/j.artint.2022.103823
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
To explain the prediction of a Deep Neural Network (DNN) model on a given sample, LIME [1] and its derivatives have been proposed to approximate the local behavior of the DNN model around the data point via linear surrogates. Though these algorithms interpret the DNN by finding the key features used for classification, the random interpolations used by LIME perturb the explanation result and cause instability and inconsistency across repeated LIME computations. To tackle this issue, we propose G-LIME, which extends vanilla LIME through high-dimensional Bayesian linear regression with sparse and informative global priors. Specifically, given a dataset representing the population of samples (e.g., the training set), G-LIME first pursues a global explanation of the DNN model over the whole dataset. Then, for a new data point, G-LIME incorporates a modified ElasticNet-like estimator to refine the local explanation, balancing the distance to the global explanation against the sparsity/feature selection of the explanation. Finally, G-LIME uses Least Angle Regression (LARS) to retrieve the solution path of the modified ElasticNet under varying L1-regularization, screening and ranking the importance of features [2] as the explanation result. Through extensive experiments on real-world tasks, we show that the proposed method yields more stable, consistent, and accurate results than LIME. (c) 2022 Elsevier B.V. All rights reserved.
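The modified-ElasticNet idea in the abstract — a least-squares fit penalized by both an L1 term and the distance to a global explanation, with LARS tracing the L1 solution path to rank features — can be sketched with off-the-shelf tools. The sketch below is illustrative and not the authors' implementation: the data, the `lam2` weight, and the global coefficients are all hypothetical, and the pull toward the global prior is realized via a standard data-augmentation trick so that scikit-learn's `lars_path` can trace the path.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): X holds binary perturbation
# masks around a sample and y the DNN's outputs on them, as in LIME;
# beta_global stands in for a global explanation fitted beforehand.
n, d = 200, 10
X = rng.integers(0, 2, size=(n, d)).astype(float)
beta_true = np.zeros(d)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)
beta_global = beta_true + 0.2 * rng.standard_normal(d)

# Center so the L1 path is not dominated by the intercept.
X = X - X.mean(axis=0)
y = y - y.mean()

lam2 = 1.0  # assumed strength of the pull toward the global explanation

# Augmentation trick: ||y - X b||^2 + lam2 * ||b - g||^2 equals the
# plain least-squares loss on the stacked system below, so an ordinary
# L1 path on the augmented data traces the modified-ElasticNet path.
X_aug = np.vstack([X, np.sqrt(lam2) * np.eye(d)])
y_aug = np.concatenate([y, np.sqrt(lam2) * beta_global])

# LARS retrieves the whole solution path as the L1 penalty shrinks;
# the order in which features enter the active set ranks importance.
alphas, active, coefs = lars_path(X_aug, y_aug, method="lasso")
print("feature ranking (order of entry):", list(active))
```

With the strong hypothetical coefficients above, the truly informative features enter the active set early, which is the screening-and-ranking behavior the abstract describes.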
Pages: 19
References
75 references in total
[1] Aas, Kjersti; Jullum, Martin; Loland, Anders. Explaining individual predictions when features are dependent: More accurate approximations to Shapley values. Artificial Intelligence, 2021, 298.
[2] Ahern I., 2019, preprint.
[3] Alvarez-Melis D., 2018, Advances in Neural Information Processing Systems, Vol. 31.
[4] Alvarez-Melis D., 2018, arXiv preprint arXiv:1806.08049.
[5] Bansal, Naman; Agarwal, Chirag; Anh Nguyen. SAM: The Sensitivity of Attribution Methods to Hyperparameters. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW 2020), 2020, pp. 11-21.
[6] Barredo Arrieta, Alejandro; Diaz-Rodriguez, Natalia; Del Ser, Javier; Bennetot, Adrien; Tabik, Siham; Barbado, Alberto; Garcia, Salvador; Gil-Lopez, Sergio; Molina, Daniel; Benjamins, Richard; Chatila, Raja; Herrera, Francisco. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 2020, 58: 82-115.
[7] Boopathy A., 2020, Proceedings of the International Conference on Machine Learning, Vol. 119, p. 1014.
[8] Boyd S.P., 2004, Convex Optimization.
[9] Chen J.F., 2019, Advances in Neural Information Processing Systems, Vol. 32.
[10] Deng J., 2009, 2009 IEEE Conference on Computer Vision and Pattern Recognition, p. 248, DOI 10.1109/CVPR.2009.5206848.