Evaluating Post-hoc Explanations for Graph Neural Networks via Robustness Analysis

Cited: 0
Authors
Fang, Junfeng [1 ]
Liu, Wei [1 ]
Gao, Yuan [1 ]
Liu, Zemin [2 ]
Zhang, An [2 ]
Wang, Xiang [1 ,3 ]
He, Xiangnan [1 ,3 ]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Natl Univ Singapore, Singapore, Singapore
[3] Hefei Comprehens Natl Sci Ctr, Inst Artificial Intelligence, Inst Dataspace, Hefei, Peoples R China
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Funding
National Natural Science Foundation of China;
Keywords
DOI
None available
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This work studies the evaluation of explanations for graph neural networks (GNNs), which is crucial to the credibility of post-hoc explainability in practical use. Conventional evaluation metrics, and even explanation methods themselves, mainly follow the paradigm of feeding the explanatory subgraph to the model and measuring the output difference, and thus mostly suffer from the notorious out-of-distribution (OOD) issue. Hence, in this work, we confront this issue by introducing a novel evaluation metric, termed OOD-resistant Adversarial Robustness (OAR). Specifically, we draw inspiration from adversarial robustness and evaluate post-hoc explanation subgraphs by calculating their robustness under attack. On top of that, an elaborate OOD reweighting block is inserted into the pipeline to confine the evaluation process to the original data distribution. For applications involving large datasets, we further devise a Simplified version of OAR (SimOAR), which achieves a significant improvement in computational efficiency at the cost of a small drop in performance. Extensive empirical studies validate the effectiveness of OAR and SimOAR. Code is available at https://github.com/MangoKiller/SimOAR_OAR.
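The abstract's core idea of scoring an explanation subgraph by the stability of the model's prediction under perturbations outside that subgraph can be illustrated with a minimal toy sketch. This is NOT the paper's OAR implementation: it omits the OOD reweighting block and the adversarial (worst-case) attack, using random edge flips instead; `robustness_score`, `toy_model`, `adj`, and `expl_mask` are hypothetical names introduced here for illustration.

```python
import numpy as np

def robustness_score(model, adj, expl_mask, n_trials=20, flip_prob=0.3, rng=None):
    """Fraction of random perturbations, restricted to edges OUTSIDE the
    explanation subgraph, under which the model's predicted label is
    unchanged. A faithful explanation should keep the prediction stable."""
    rng = np.random.default_rng(rng)
    base_label = np.argmax(model(adj))
    agree = 0
    for _ in range(n_trials):
        # Propose random edge flips, but never touch explained edges.
        flips = (rng.random(adj.shape) < flip_prob) & ~expl_mask
        perturbed = np.where(flips, 1.0 - adj, adj)
        agree += int(np.argmax(model(perturbed)) == base_label)
    return agree / n_trials

# Toy stand-in for a GNN: classifies solely on whether edge (0, 1) exists.
def toy_model(adj):
    return np.array([adj[0, 1], 1.0 - adj[0, 1]])

adj = np.zeros((3, 3))
adj[0, 1] = adj[1, 0] = 1.0

good = np.zeros((3, 3), dtype=bool)
good[0, 1] = good[1, 0] = True   # explanation covers the decisive edge
bad = np.zeros((3, 3), dtype=bool)
bad[1, 2] = bad[2, 1] = True     # explanation misses the decisive edge

print(robustness_score(toy_model, adj, good, rng=0))  # 1.0: prediction never changes
print(robustness_score(toy_model, adj, bad, n_trials=50, rng=0))  # < 1.0: decisive edge gets flipped
```

Because perturbations are confined to unexplained edges, an explanation that captures the truly decisive structure leaves the prediction intact in every trial, while one that misses it lets perturbations flip the label; the paper's OOD reweighting additionally down-weights perturbed graphs that fall outside the training distribution.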
Pages: 18
Related papers
50 records in total
  • [1] Heterogeneous graph neural networks with post-hoc explanations for multi-modal and explainable land use inference
    Zhai, Xuehao
    Jiang, Junqi
    Dejl, Adam
    Rago, Antonio
    Guo, Fangce
    Toni, Francesca
    Sivakumar, Aruna
    INFORMATION FUSION, 2025, 120
  • [2] Evaluating Stability of Post-hoc Explanations for Business Process Predictions
    Velmurugan, Mythreyi
    Ouyang, Chun
    Moreira, Catarina
    Sindhgatta, Renuka
    SERVICE-ORIENTED COMPUTING (ICSOC 2021), 2021, 13121 : 49 - 64
  • [3] Ontology-Based Post-Hoc Neural Network Explanations Via Simultaneous Concept Extraction
    Ponomarev, Andrew
    Agafonov, Anton
    INTELLIGENT SYSTEMS AND APPLICATIONS, VOL 2, INTELLISYS 2023, 2024, 823 : 433 - 446
  • [4] Evaluating Link Prediction Explanations for Graph Neural Networks
    Borile, Claudio
    Perotti, Alan
    Panisson, Andre
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT II, 2023, 1902 : 382 - 401
  • [5] When are Post-hoc Conceptual Explanations Identifiable?
    Leemann, Tobias
    Kirchhof, Michael
    Rong, Yao
    Kasneci, Enkelejda
    Kasneci, Gjergji
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 1207 - 1218
  • [6] Ontology-Based Post-Hoc Explanations via Simultaneous Concept Extraction
    Ponomarev, Andrew
    Agafonov, Anton
    2022 21ST IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS, ICMLA, 2022, : 887 - 890
  • [7] Limitations of Post-Hoc Feature Alignment for Robustness
    Burns, Collin
    Steinhardt, Jacob
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 2525 - 2533
  • [8] Generating Recommendations with Post-Hoc Explanations for Citizen Science
    Ben Zaken, Daniel
    Shani, Guy
    Segal, Avi
    Cavalier, Darlene
    Gal, Kobi
    PROCEEDINGS OF THE 30TH ACM CONFERENCE ON USER MODELING, ADAPTATION AND PERSONALIZATION, UMAP 2022, 2022, : 69 - 78
  • [9] The Dangers of Post-hoc Interpretability: Unjustified Counterfactual Explanations
    Laugel, Thibault
    Lesot, Marie-Jeanne
    Marsala, Christophe
    Renard, Xavier
    Detyniecki, Marcin
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 2801 - 2807
  • [10] An Empirical Comparison of Interpretable Models to Post-Hoc Explanations
    Mahya, Parisa
    Fuernkranz, Johannes
    AI, 2023, 4 (02) : 426 - 436