SEEN: Sharpening Explanations for Graph Neural Networks Using Explanations From Neighborhoods

Cited by: 0
Authors
Cho, Hyeoncheol [1 ]
Oh, Youngrock [2 ]
Jeon, Eunjoo [1 ]
Affiliations
[1] Samsung SDS, Seoul, South Korea
[2] Mobilint, Seoul, South Korea
Source
ADVANCES IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING | 2023, Vol. 3, No. 02
Keywords
Explainable AI; Explanation Enhancement; Explainability Technique; Graph Neural Network;
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Explaining the foundations of predictions obtained from graph neural networks (GNNs) is critical for the credible use of GNN models on real-world problems. Owing to the rapid growth of GNN applications, recent progress in explaining GNN predictions, such as sensitivity analysis, perturbation methods, and attribution methods, has shown great promise. In this study, we propose SEEN, a method that improves the explanation quality of node classification tasks and can be applied in a post hoc manner by aggregating auxiliary explanations from important neighboring nodes. Applying SEEN does not require modification of the graph, and because its mechanism is independent of the underlying explainer, it can be used with diverse explainability techniques. Experiments on matching motif-participating nodes in a given graph show improvements in explanation accuracy of up to 12.71% and demonstrate, by varying their contributions, the correlation between the auxiliary explanations and the enhanced explanation accuracy. SEEN provides a simple but effective way to enhance the explanation quality of GNN model outputs and is applicable in combination with most explainability techniques.
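The core idea described in the abstract, sharpening a node's explanation by mixing in auxiliary explanations from its neighbors, can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function name, the degree-normalized averaging, and the single blending weight `neighbor_weight` are all assumptions made for the sketch; the paper's exact aggregation and neighbor-importance weighting may differ.

```python
import numpy as np

def seen_aggregate(base_scores, adjacency, neighbor_weight=0.5):
    """Hypothetical sketch of SEEN-style aggregation.

    base_scores[i, j]: importance that node i's base explanation
    assigns to node j (one row per explained node).
    adjacency: binary adjacency matrix of the graph.
    """
    base_scores = np.asarray(base_scores, dtype=float)
    adjacency = np.asarray(adjacency, dtype=float)
    # Average the auxiliary explanations over each node's neighbors,
    # normalizing by degree (an assumption of this sketch).
    degree = adjacency.sum(axis=1, keepdims=True)
    degree[degree == 0] = 1.0  # avoid division by zero for isolated nodes
    neighbor_avg = (adjacency / degree) @ base_scores
    # Blend each node's own explanation with the neighbor average.
    return (1 - neighbor_weight) * base_scores + neighbor_weight * neighbor_avg
```

With `neighbor_weight=0`, the base explanation is returned unchanged; increasing the weight pulls each node's explanation toward what its neighbors' explanations highlight, which is the post hoc, explainer-agnostic behavior the abstract describes.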
Pages: 1165-1179
Page count: 15