Towards Faithful and Consistent Explanations for Graph Neural Networks

Cited by: 7
Authors
Zhao, Tianxiang [1 ]
Luo, Dongsheng [2 ]
Zhang, Xiang [1 ]
Wang, Suhang [1 ]
Affiliations
[1] Penn State Univ, Coll Informat Sci & Technol, State Coll, PA 16802 USA
[2] Florida Int Univ, Knight Fdn Sch Comp & Informat Sci, Miami, FL USA
Source
PROCEEDINGS OF THE SIXTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING, WSDM 2023, VOL 1 | 2023
Funding
U.S. National Science Foundation;
Keywords
graph neural networks; explainability;
DOI
10.1145/3539597.3570421
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Uncovering the rationales behind predictions of graph neural networks (GNNs) has received increasing attention in recent years. Instance-level GNN explanation aims to discover the critical input elements, such as nodes or edges, that the target GNN relies upon for making predictions. Though various algorithms have been proposed, most of them formalize this task as searching for the minimal subgraph that preserves the original prediction. However, an inductive bias is deep-rooted in this framework: several distinct subgraphs can yield the same or similar outputs as the original graph. Consequently, these methods risk providing spurious explanations and fail to provide consistent ones. Applying them to explain weakly-performing GNNs further amplifies these issues. To address this problem, we theoretically examine the predictions of GNNs from the causality perspective. Two typical causes of spurious explanations are identified: the confounding effect of latent variables, such as distribution shift, and causal factors distinct from the original input. Observing that both confounding effects and diverse causal rationales are encoded in internal representations, we propose a simple yet effective countermeasure that aligns embeddings. Concretely, to account for potential shifts in the high-dimensional embedding space, we design a distribution-aware alignment algorithm based on anchors. This new objective is easy to compute and can be incorporated into existing techniques with little or no effort. Theoretical analysis shows that it in effect optimizes a more faithful explanation objective, which further justifies the proposed approach.
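This record contains only the abstract, so the implementation of the anchor-based alignment is not shown here. As a rough illustration of the idea the abstract describes, the minimal PyTorch sketch below (the function name, tensor shapes, and the KL-based formulation are assumptions made for illustration, not the authors' released code) compares the embedding of a candidate explanation subgraph with that of the original graph through their distances to a set of anchor embeddings, so that the comparison remains aware of where both embeddings fall within the learned representation distribution.

```python
import torch
import torch.nn.functional as F


def anchor_alignment_loss(z_orig, z_expl, z_anchors):
    """Hypothetical anchor-based alignment term (illustrative sketch only).

    z_orig:    embedding of the original graph, shape (d,)
    z_expl:    embedding of the candidate explanation subgraph, shape (d,)
    z_anchors: embeddings of reference (anchor) graphs, shape (n_anchors, d)
    """
    # Describe each embedding by its distances to the anchors rather than
    # comparing z_expl to z_orig directly; this keeps the alignment aware
    # of where the embeddings lie relative to the learned distribution.
    d_orig = torch.cdist(z_orig.unsqueeze(0), z_anchors).squeeze(0)  # (n_anchors,)
    d_expl = torch.cdist(z_expl.unsqueeze(0), z_anchors).squeeze(0)  # (n_anchors,)

    # Turn distances into soft "positions" relative to the anchor set.
    p_orig = F.softmax(-d_orig, dim=0)
    p_expl = F.softmax(-d_expl, dim=0)

    # Penalize explanation subgraphs whose embedding sits in a different
    # region of the representation space than the original graph.
    return F.kl_div(p_expl.log(), p_orig, reduction="sum")
```

In a subgraph-search explainer, such a term would typically be added, with a weighting coefficient, to the usual prediction-preservation and sparsity objectives.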
Pages: 634-642
Number of pages: 9