Graph Neural Networks for Vulnerability Detection: A Counterfactual Explanation

Cited by: 3
Authors
Chu, Zhaoyang [1 ,6 ]
Wan, Yao [1 ,6 ]
Li, Qian [2 ]
Wu, Yang [1 ,6 ]
Zhang, Hongyu [3 ]
Sui, Yulei [4 ]
Xu, Guandong [5 ]
Jin, Hai [1 ,6 ]
Affiliations
[1] Huazhong Univ Sci & Technol, Sch Comp Sci & Technol, Wuhan, Peoples R China
[2] Curtin Univ, Sch Elect Engn Comp & Math Sci, Perth, WA, Australia
[3] Chongqing Univ, Sch Big Data & Software Engn, Chongqing, Peoples R China
[4] Univ New South Wales, Sch Comp Sci & Engn, Kensington, NSW, Australia
[5] Univ Technol Sydney, Sch Comp Sci, Ultimo, Australia
[6] Huazhong Univ Sci & Technol, Natl Engn Res Ctr Big Data Technol & Syst, Serv Comp Technol & Syst Lab, Cluster & Grid Comp Lab, Wuhan 430074, Peoples R China
Source
PROCEEDINGS OF THE 33RD ACM SIGSOFT INTERNATIONAL SYMPOSIUM ON SOFTWARE TESTING AND ANALYSIS, ISSTA 2024 | 2024
Keywords
Vulnerability detection; graph neural networks; model explainability; counterfactual reasoning; what-if analysis
DOI
10.1145/3650212.3652136
Abstract
Vulnerability detection is crucial for ensuring the security and reliability of software systems. Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection, owing to their ability to capture the underlying semantic structure of source code. However, GNNs face significant challenges in explainability due to their inherently black-box nature. To this end, several factual reasoning-based explainers have been proposed. These explainers explain the predictions made by GNNs by analyzing the key features that contribute to the outcomes. We argue that such factual reasoning-based explanations cannot answer a critical what-if question: "What would happen to the GNN's decision if we were to alter the code graph into alternative structures?" Inspired by advances in counterfactual reasoning in artificial intelligence, we propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection. Unlike factual reasoning-based explainers, CFExplainer seeks the minimal perturbation to the input code graph that leads to a change in the prediction, thereby addressing the what-if question for vulnerability detection. We term this perturbation a counterfactual explanation, which can pinpoint the root causes of the detected vulnerability and furnish valuable insights for developers to undertake appropriate actions to fix the vulnerability. Extensive experiments on four GNN-based vulnerability detection models demonstrate the effectiveness of CFExplainer over existing state-of-the-art factual reasoning-based explainers.
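The core idea described above can be illustrated with a minimal sketch: given a detector over a code graph, greedily remove the edges that most influence the score until the prediction flips; the removed edges form a counterfactual explanation. This is a hypothetical toy, not CFExplainer's actual algorithm (which learns a differentiable edge mask over a trained GNN); `predict`, the edge weights, and the greedy loop here are all illustrative assumptions.

```python
def predict(edges, weights):
    """Toy stand-in for a trained GNN detector: flags the code graph as
    vulnerable (label 1) if the total weight of its remaining edges
    exceeds a threshold."""
    score = sum(weights[e] for e in edges)
    return (1 if score > 1.0 else 0), score

def counterfactual_edges(edges, weights):
    """Greedily remove the highest-impact edge until the prediction
    flips; the removed set is a (not necessarily minimal) counterfactual
    explanation for the original 'vulnerable' decision."""
    remaining = set(edges)
    removed = []
    label, _ = predict(remaining, weights)
    while label == 1 and remaining:
        # Remove the edge whose deletion lowers the score the most.
        victim = max(remaining, key=lambda e: weights[e])
        remaining.discard(victim)
        removed.append(victim)
        label, _ = predict(remaining, weights)
    return removed

# Hypothetical data-flow edges of a vulnerable snippet, with importances.
weights = {("buf", "memcpy"): 0.9, ("len", "memcpy"): 0.4, ("src", "buf"): 0.2}
explanation = counterfactual_edges(list(weights), weights)
print(explanation)  # → [('buf', 'memcpy')]: removing this edge flips the decision
```

In this toy run, deleting the single `("buf", "memcpy")` edge already drops the score below the threshold, so that edge alone answers the what-if question and points a developer at the suspect data flow.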
Pages: 389-401 (13 pages)
Related Papers (50 total)
  • [11] Heterogeneous graph neural networks for fraud detection and explanation in supply chain finance
    Wu, Bin
    Chao, Kuo-Ming
    Li, Yinsheng
    INFORMATION SYSTEMS, 2024, 121
  • [12] Global explanation supervision for Graph Neural Networks
    Etemadyrad, Negar
    Gao, Yuyang
    Dinakarrao, Sai Manoj Pudukotai
    Zhao, Liang
    FRONTIERS IN BIG DATA, 2024, 7
  • [13] On Structural Explanation of Bias in Graph Neural Networks
    Dong, Yushun
    Wang, Song
    Wang, Yu
    Derr, Tyler
    Li, Jundong
    PROCEEDINGS OF THE 28TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, KDD 2022, 2022, : 316 - 326
  • [14] Counterfactual Graph Learning for Anomaly Detection on Attributed Networks
    Xiao, Chunjing
    Xu, Xovee
    Lei, Yue
    Zhang, Kunpeng
    Liu, Siyuan
    Zhou, Fan
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (10) : 10540 - 10553
  • [15] Explanation-based Graph Neural Networks for Graph Classification
    Seo, Sangwoo
    Jung, Seungjun
    Kim, Changick
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2836 - 2842
  • [16] A dual graph neural networks model using sequence embedding as graph nodes for vulnerability detection
    Ling, Miaogui
    Tang, Mingwei
    Bian, Deng
    Lv, Shixuan
    Tang, Qi
    INFORMATION AND SOFTWARE TECHNOLOGY, 2025, 177
  • [17] CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
    Lucic, Ana
    ter Hoeve, Maartje
    Tolomei, Gabriele
    de Rijke, Maarten
    Silvestri, Fabrizio
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022, 151
  • [18] Combining Graph Neural Networks With Expert Knowledge for Smart Contract Vulnerability Detection
    Liu, Zhenguang
    Qian, Peng
    Wang, Xiaoyang
    Zhuang, Yuan
    Qiu, Lin
    Wang, Xun
    IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, 35 (02) : 1296 - 1310
  • [19] LineVD: Statement-level Vulnerability Detection using Graph Neural Networks
    Hin, David
    Kan, Andrey
    Chen, Huaming
    Babar, M. Ali
    2022 MINING SOFTWARE REPOSITORIES CONFERENCE (MSR 2022), 2022, : 596 - 607
  • [20] GRETEL: Graph Counterfactual Explanation Evaluation Framework
    Prado-Romero, Mario Alfonso
    Stilo, Giovanni
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 4389 - 4393