Perturb more, trap more: Understanding behaviors of graph neural networks

Cited by: 2
Authors
Ji, Chaojie [1 ]
Wang, Ruxin [1 ]
Wu, Hongyan [1 ]
Affiliations
[1] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Graph neural networks; Explainability;
DOI
10.1016/j.neucom.2022.04.070
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
While graph neural networks (GNNs) have shown great potential in various graph-related tasks, their lack of transparency has hindered our understanding of how they arrive at their predictions. Fidelity to the local decision boundary of the original model, which indicates how well an explainer fits the original model around the instance being explained, is neglected by existing GNN explainers. In this paper, we propose a novel post hoc framework based on local fidelity, called TraP2, which can generate high-fidelity explanations for any trained GNN. Because both the relevant graph structure and the important features inside each node must be highlighted, TraP2 is designed as a three-layer architecture: i) the interpretation domain is defined in advance by the Translation layer; ii) the local predictive behaviors of the explained GNN are probed and monitored by the Perturbation layer, which conducts multiple perturbations at both the graph-structure and node-feature levels within the interpretation domain; and iii) highly faithful explanations are generated by the Paraphrase layer, which fits the local decision boundary of the explained GNN. We evaluated TraP2 on several benchmark datasets under four metrics (accuracy, area under the receiver operating characteristic curve, fidelity, and contrastivity), and the results show that it significantly outperforms state-of-the-art methods. (c) 2022 Elsevier B.V. All rights reserved.
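The abstract describes a perturb-then-fit pattern: probe a trained GNN with structure- and feature-level perturbations, then fit a surrogate to its local decision boundary. As a minimal illustration of that general idea (a LIME-style sketch, not the authors' TraP2 implementation; the toy graph, the stand-in black-box model, and all function names below are assumptions), the following NumPy code randomly masks edges and features, records the model's responses, and fits a linear surrogate whose coefficients serve as importance scores:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_gnn(adj, feats):
    """Stand-in for the trained GNN being explained (hypothetical toy model:
    one propagation step over self + neighbors, then a fixed linear readout)."""
    w = np.linspace(1.0, 2.0, feats.shape[1])
    h = (adj + np.eye(adj.shape[0])) @ feats   # aggregate self + neighbors
    return h @ w                               # one score per node

def perturb_and_fit(adj, feats, node, n_samples=300):
    """Perturb edges and node features around `node`, probe the black-box
    model, and fit a linear surrogate to its local responses.
    Coefficient magnitudes act as edge/feature importance scores."""
    n, d = feats.shape
    edges = np.argwhere(np.triu(adj, 1) > 0)             # undirected edge list
    m = len(edges)
    masks = rng.integers(0, 2, size=(n_samples, m + d))  # 1 = keep, 0 = drop
    ys = np.empty(n_samples)
    for i, mask in enumerate(masks):
        a = adj.copy()
        for (u, v), keep in zip(edges, mask[:m]):
            if not keep:
                a[u, v] = a[v, u] = 0.0                  # drop a structural edge
        f = feats * mask[m:]                             # zero out dropped features
        ys[i] = black_box_gnn(a, f)[node]                # probe the model locally
    X = np.column_stack([masks, np.ones(n_samples)])     # add an intercept term
    coef, *_ = np.linalg.lstsq(X.astype(float), ys, rcond=None)
    return edges, coef[:m], coef[m:m + d]
```

The linear fit here corresponds loosely to the Paraphrase layer's role of approximating the local decision boundary; the random masking loop corresponds to the Perturbation layer's probing. TraP2 itself restricts perturbations to a predefined interpretation domain, which this sketch omits.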
Pages: 59-75
Page count: 17