Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation

Cited by: 3
Authors
Chu, Zhaoyang [1 ,6 ]
Wan, Yao [1 ,6 ]
Li, Qian [2 ]
Wu, Yang [1 ,6 ]
Zhang, Hongyu [3 ]
Sui, Yulei [4 ]
Xu, Guandong [5 ]
Jin, Hai [1 ,6 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan, Peoples R China
[2] Curtin Univ, Sch Elect Engn Comp & Math Sci, Perth, WA, Australia
[3] Chongqing Univ, Sch Big Data & Software Engn, Chongqing, Peoples R China
[4] Univ New South Wales, Sch Comp Sci & Engn, Kensington, NSW, Australia
[5] Univ Technol Sydney, Sch Comp Sci, Ultimo, Australia
[6] Huazhong Univ Sci & Technol, Natl Engn Res Ctr Big Data Technol & Syst, Serv Comp Technol & Syst Lab, Cluster & Grid Comp Lab, Wuhan 430074, Peoples R China
Keywords
Vulnerability detection; graph neural networks; model explainability; counterfactual reasoning; what-if analysis;
DOI
10.1145/3650212.3652136
Abstract
Vulnerability detection is crucial for ensuring the security and reliability of software systems. Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection, owing to their ability to capture the underlying semantic structure of source code. However, GNNs face significant challenges in explainability due to their inherently black-box nature. To address this, several factual reasoning-based explainers have been proposed. These explainers provide explanations for the predictions made by GNNs by analyzing the key features that contribute to the outcomes. We argue that these factual reasoning-based explanations cannot answer critical what-if questions: "What would happen to the GNN's decision if we were to alter the code graph into alternative structures?" Inspired by advances in counterfactual reasoning in artificial intelligence, we propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection. Unlike factual reasoning-based explainers, CFExplainer seeks the minimal perturbation to the input code graph that leads to a change in the prediction, thereby addressing the what-if questions for vulnerability detection. We term this perturbation a counterfactual explanation, which can pinpoint the root causes of the detected vulnerability and furnish valuable insights for developers to take appropriate actions to fix the vulnerability. Extensive experiments on four GNN-based vulnerability detection models demonstrate the effectiveness of CFExplainer over existing state-of-the-art factual reasoning-based explainers.
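The abstract describes the core idea of CFExplainer: searching for a minimal perturbation of the input code graph that flips the GNN detector's prediction. The sketch below illustrates that idea only in a generic form, assuming a toy dense-adjacency GNN and a hinge-style flip loss; the names ToyGNNDetector, counterfactual_search, and the hyperparameters steps, lr, and alpha are hypothetical and are not the authors' CFExplainer implementation.

```python
import torch

# Toy stand-in for a GNN-based vulnerability detector: two rounds of message
# passing over a dense adjacency matrix, then a graph-level binary logit.
# Any differentiable model with this (adj, feats) -> logit interface would do.
class ToyGNNDetector(torch.nn.Module):
    def __init__(self, in_dim=8, hid_dim=16):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, hid_dim)
        self.out = torch.nn.Linear(hid_dim, 1)

    def forward(self, adj, feats):
        h = torch.relu(adj @ self.lin1(feats))     # message passing, layer 1
        h = torch.relu(adj @ self.lin2(h))         # message passing, layer 2
        return self.out(h.mean(dim=0)).squeeze()   # graph readout -> scalar logit


def counterfactual_search(model, adj, feats, steps=300, lr=0.1, alpha=0.5):
    """Optimize a soft mask over existing edges so that deleting as few
    edges as possible flips the detector's decision on this code graph."""
    orig_sign = 1.0 if model(adj, feats) > 0 else -1.0    # original decision
    mask = torch.nn.Parameter(torch.zeros_like(adj))      # edge-mask logits
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        keep = torch.sigmoid(mask) * adj                  # softly perturbed graph
        logit = model(keep, feats)
        flip_loss = torch.relu(1.0 + orig_sign * logit)   # push across the boundary
        sparsity = ((1.0 - torch.sigmoid(mask)) * adj).sum()  # edges removed (soft)
        loss = flip_loss + alpha * sparsity
        opt.zero_grad()
        loss.backward()
        opt.step()
    removed = (torch.sigmoid(mask) < 0.5) & (adj > 0)     # hardened counterfactual
    return removed  # edges whose deletion changes the prediction


if __name__ == "__main__":
    torch.manual_seed(0)
    n = 6
    adj = (torch.rand(n, n) > 0.5).float()   # random stand-in for a code graph
    feats = torch.randn(n, 8)                # random node (statement) features
    model = ToyGNNDetector()
    removed = counterfactual_search(model, adj, feats)
    print("Counterfactual edge deletions:", removed.nonzero().tolist())
```

In this toy setting, the returned edge set plays the role of the counterfactual explanation: an approximately minimal set of code-graph edges whose removal changes the detector's decision, which is the kind of what-if evidence the abstract argues developers need to locate and fix the vulnerability.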
Pages: 389-401
Page count: 13