Interpretable Model-Agnostic Explanations Based on Feature Relationships for High-Performance Computing

Cited: 4
Authors
Chen, Zhouyuan [1 ]
Lian, Zhichao [1 ]
Xu, Zhe [1 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Cyberspace Secur, Nanjing 214400, Peoples R China
Keywords
interpretability; model-agnostic explanations; feature relationship; super pixel;
DOI
10.3390/axioms12100997
CLC Classification Number
O29 [Applied Mathematics]
Discipline Code
070104
Abstract
In the field of explainable artificial intelligence (XAI), an algorithm or tool helps people understand how a model makes its decisions, which in turn helps identify important features and reduce computational cost for high-performance computing. However, existing methods typically visualize important features or highlight active neurons, and few reveal the importance of relationships between features. In recent years, some white-box methods have taken feature relationships into account, but most of them work only on specific models. Black-box methods avoid this limitation, yet most of them apply only to tabular or text data rather than image data. To address these problems, we propose a local interpretable model-agnostic explanation approach based on feature relationships. The approach incorporates relationships between features into the interpretation process and then visualizes the interpretation results. Finally, this paper conducts extensive experiments to evaluate the correctness of the identified feature relationships and assesses the XAI method in terms of accuracy, fidelity, and consistency.
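The abstract describes a LIME-style local surrogate built over image superpixels. The following is a minimal sketch of that general perturbation-and-fit idea, not the authors' exact algorithm: it uses a simple grid "segmentation" in place of a real superpixel algorithm, a stand-in black-box scorer, and NumPy weighted least squares in place of a full surrogate library. All function names here (`black_box`, `explain`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(images):
    # Stand-in classifier (hypothetical): scores each image by the mean
    # brightness of its top-left quadrant. Any model mapping a batch of
    # images to one scalar per image would work here.
    return images[:, :8, :8].mean(axis=(1, 2))

def explain(image, n_side=4, n_samples=200, kernel_width=0.25):
    """LIME-style local surrogate over grid 'superpixels' (a sketch)."""
    h, w = image.shape
    sh, sw = h // n_side, w // n_side
    n_seg = n_side ** 2

    # Random on/off masks over the superpixels; row 0 is the original.
    z = rng.integers(0, 2, size=(n_samples, n_seg))
    z[0] = 1

    # Build perturbed images: an "off" segment is grayed to the image mean.
    perturbed = np.empty((n_samples, h, w))
    for i, mask in enumerate(z):
        img = image.copy()
        for s in range(n_seg):
            if mask[s] == 0:
                r, c = divmod(s, n_side)
                img[r * sh:(r + 1) * sh, c * sw:(c + 1) * sw] = image.mean()
        perturbed[i] = img

    y = black_box(perturbed)

    # Proximity kernel: samples with fewer segments removed count more.
    dist = 1.0 - z.mean(axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)

    # Weighted least squares -> one importance weight per superpixel.
    sqw = np.sqrt(weights)[:, None]
    X = np.hstack([z, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(X * sqw, y * sqw[:, 0], rcond=None)
    return coef[:n_seg]  # drop the intercept term

image = rng.random((16, 16))
importances = explain(image)
print(importances.shape)  # (16,)
```

Because the stand-in scorer only looks at the top-left quadrant, the fitted weights for superpixels outside that quadrant come out (numerically) zero, which is the sanity check a surrogate of this kind should pass.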
Pages: 11
Related Papers
36 records in total
  • [1] Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
    Aas, Kjersti
    Jullum, Martin
    Loland, Anders
    [J]. ARTIFICIAL INTELLIGENCE, 2021, 298
  • [2] Alvarez-Melis D, 2018, ADV NEUR IN, V31
  • [3] Nguyen A, 2016, ADV NEUR IN, V29
  • [4] On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation
    Bach, Sebastian
    Binder, Alexander
    Montavon, Gregoire
    Klauschen, Frederick
    Mueller, Klaus-Robert
    Samek, Wojciech
    [J]. PLOS ONE, 2015, 10 (07):
  • [5] Bahdanau D, 2016, Arxiv, DOI arXiv:1409.0473
  • [6] Dikopoulou Z, 2021, Arxiv, DOI arXiv:2107.09927
  • [7] Greedy function approximation: A gradient boosting machine
    Friedman, JH
    [J]. ANNALS OF STATISTICS, 2001, 29 (05) : 1189 - 1232
  • [8] Ge Yuying, 2021, P IEEE CVF C COMP VI
  • [9] Graves A, 2013, INT CONF ACOUST SPEE, P6645, DOI 10.1109/ICASSP.2013.6638947
  • [10] Deep Residual Learning for Image Recognition
    He, Kaiming
    Zhang, Xiangyu
    Ren, Shaoqing
    Sun, Jian
    [J]. 2016 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2016, : 770 - 778