Post-hoc explanation of black-box classifiers using confident itemsets

Cited by: 76
Authors
Moradi, Milad [1 ]
Samwald, Matthias [1 ]
Affiliations
[1] Medical University of Vienna, Center for Medical Statistics, Informatics and Intelligent Systems, Institute of Artificial Intelligence and Decision Support, Vienna, Austria
Keywords
Explainable artificial intelligence; Machine learning; Post-hoc explanation; Confident itemsets; Interpretability; Fidelity; Rules
DOI
10.1016/j.eswa.2020.113941
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Black-box Artificial Intelligence (AI) methods, e.g., deep neural networks, have been widely used to build predictive models that can extract complex relationships from a dataset and make predictions for new, unseen data records. However, it is difficult to trust decisions made by such methods, since their inner workings and decision logic are hidden from the user. Explainable Artificial Intelligence (XAI) refers to systems that try to explain how a black-box AI model produces its outcomes. Post-hoc XAI methods approximate the behavior of a black-box by extracting relationships between feature values and the predictions. Perturbation-based and decision-set methods are among the most commonly used post-hoc XAI systems. The former explanators rely on random perturbations of data records to build local or global linear models that explain individual predictions or the whole model. The latter explanators use the feature values that appear most frequently to construct a set of decision rules that produces the same outcomes as the target black-box. However, both classes of XAI methods have limitations. Random perturbations do not take into account the distribution of feature values in different subspaces, leading to misleading approximations. Decision sets pay attention only to frequent feature values, missing many important correlations between features and class labels that appear less frequently but accurately represent the decision boundaries of the model. In this paper, we address these challenges by proposing an explanation method named Confident Itemsets Explanation (CIE). We introduce confident itemsets, sets of feature values that are highly correlated with a specific class label. CIE uses confident itemsets to discretize the whole decision space of a model into smaller subspaces. By extracting important correlations between the features and the outcomes of the classifier in different subspaces, CIE produces instance-wise and class-wise explanations that accurately approximate the behavior of the target black-box. In a set of experiments on various black-box classifiers and different tabular and textual data classification tasks, we show that CIE outperforms previous perturbation-based and rule-based explanators in terms of the descriptive accuracy (an improvement of 9.3%) and the interpretability (an improvement of 8.8%) of its explanations. Subjective evaluations demonstrate that users find the explanations of CIE more understandable and interpretable than those of the comparison methods.
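The central notion of a confident itemset lends itself to a short illustration. The Python sketch below mines (feature, value) itemsets whose class-conditional confidence, i.e., the fraction of records containing the itemset that the black-box also assigns a given class label, exceeds a threshold. This is a minimal sketch based only on the abstract's description, not the authors' implementation; the function and parameter names (`mine_confident_itemsets`, `min_confidence`, `max_len`) are illustrative assumptions.

```python
# Minimal sketch of confident-itemset mining, assuming the confidence of an
# itemset X toward class c is support(X and c) / support(X). Brute-force
# enumeration for clarity; names are illustrative, not the paper's code.
from collections import Counter
from itertools import combinations

def mine_confident_itemsets(records, labels, max_len=2, min_confidence=0.9):
    """records: list of dicts mapping feature name -> discrete value.
    labels: black-box prediction for each record.
    Returns (itemset, label, confidence) triples above the threshold."""
    support = Counter()        # itemset -> number of records containing it
    class_support = Counter()  # (itemset, label) -> co-occurrence count
    for record, label in zip(records, labels):
        items = sorted(record.items())
        for size in range(1, max_len + 1):
            for subset in combinations(items, size):
                key = frozenset(subset)
                support[key] += 1
                class_support[(key, label)] += 1
    return [
        (itemset, label, class_support[(itemset, label)] / support[itemset])
        for (itemset, label) in class_support
        if class_support[(itemset, label)] / support[itemset] >= min_confidence
    ]

# Toy usage: {('smoker', 'yes')} is fully confident toward the label 'high'.
records = [
    {"smoker": "yes", "age": "old"},
    {"smoker": "yes", "age": "young"},
    {"smoker": "no", "age": "old"},
]
labels = ["high", "high", "low"]
for itemset, label, conf in mine_confident_itemsets(records, labels):
    print(dict(itemset), "->", label, f"(confidence {conf:.2f})")
```

In the paper's pipeline, such itemsets then partition the decision space into subspaces from which instance-wise and class-wise explanations are assembled; a real implementation would presumably use a proper frequent-itemset miner (e.g., Apriori-style pruning) rather than this brute-force enumeration.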
Pages: 14