Evaluating Post-hoc Explanations for Graph Neural Networks via Robustness Analysis

Cited: 0
Authors
Fang, Junfeng [1 ]
Liu, Wei [1 ]
Gao, Yuan [1 ]
Liu, Zemin [2 ]
Zhang, An [2 ]
Wang, Xiang [1 ,3 ]
He, Xiangnan [1 ,3 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Natl Univ Singapore, Singapore, Singapore
[3] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Inst Dataspace, Hefei, Peoples R China
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Funding
National Natural Science Foundation of China
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
This work studies the evaluation of explanations for graph neural networks (GNNs), which is crucial to the credibility of post-hoc explainability in practical usage. Conventional evaluation metrics, and even explanation methods (which mainly follow the paradigm of feeding the explanatory subgraph to the model and measuring the output difference), mostly suffer from the notorious out-of-distribution (OOD) issue. Hence, in this work, we endeavor to confront this issue by introducing a novel evaluation metric, termed OOD-resistant Adversarial Robustness (OAR). Specifically, we draw inspiration from adversarial robustness and evaluate post-hoc explanation subgraphs by calculating their robustness under attack. On top of that, an elaborate OOD reweighting block is inserted into the pipeline to confine the evaluation process to the original data distribution. For applications involving large datasets, we further devise a Simplified version of OAR (SimOAR), which achieves a significant improvement in computational efficiency at the cost of a small amount of performance. Extensive empirical studies validate the effectiveness of our OAR and SimOAR. Code is available at https://github.com/MangoKiller/SimOAR_OAR.
Pages: 18
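For readers unfamiliar with the evaluation paradigms mentioned in the abstract, the minimal sketch below contrasts the conventional "feed the explanatory subgraph to the model and measure the output difference" metric with a robustness-under-attack style of evaluation. It is an illustrative sketch only, not the authors' OAR implementation: the GNN interface model(adj, feats) -> class probabilities, the binary edge_mask marking explanatory edges, and the random edge flips standing in for an adversarial attack are all simplifying assumptions, and the paper's OOD reweighting block is omitted.

    import numpy as np

    def fidelity_style_score(model, adj, feats, edge_mask):
        """Conventional paradigm: feed only the explanatory subgraph to the model
        and measure the drop in the predicted class probability. The truncated
        input may fall outside the training distribution (the OOD issue)."""
        full_pred = model(adj, feats)
        sub_pred = model(adj * edge_mask, feats)   # keep only explanatory edges
        label = int(np.argmax(full_pred))
        return float(full_pred[label] - sub_pred[label])

    def robustness_style_score(model, adj, feats, edge_mask,
                               n_trials=50, flip_prob=0.2, seed=0):
        """Robustness-style evaluation: keep the explanatory edges fixed, randomly
        flip the remaining edges (a crude stand-in for an adversarial attack), and
        report how often the original prediction survives."""
        rng = np.random.default_rng(seed)
        label = int(np.argmax(model(adj, feats)))
        kept = 0
        for _ in range(n_trials):
            flips = (rng.random(adj.shape) < flip_prob).astype(adj.dtype)
            flips = np.triu(flips, 1)
            flips = flips + flips.T                      # symmetric, undirected flips
            perturbed = np.where(edge_mask > 0, adj,     # explanation untouched
                                 np.abs(adj - flips))    # XOR-flip non-explanation edges
            kept += int(np.argmax(model(perturbed, feats)) == label)
        return kept / n_trials

Under this reading, a better explanation subgraph is one whose preservation keeps the prediction stable under perturbation of the rest of the graph, which avoids ever feeding the model a bare, truncated subgraph; the full OAR metric additionally reweights perturbed samples to stay close to the original data distribution.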