Reinforcement learning-based secure training for adversarial defense in graph neural networks

Times Cited: 0
Authors
An, Dongdong [1 ]
Yang, Yi [1 ]
Gao, Xin [1 ]
Qi, Hongda [1 ]
Yang, Yang [2 ]
Ye, Xin [3 ]
Li, Maozhen [4 ]
Zhao, Qin [1 ]
Affiliations
[1] Shanghai Normal Univ, Shanghai Engn Res Ctr Intelligent Educ & Big data, 100 Guilin Rd, Shanghai 200234, Peoples R China
[2] East China Normal Univ, Natl Trusted Embedded Software Engn Technol Res Ctr, Shanghai 200062, Peoples R China
[3] Harbin Inst Technol Shenzhen, Sch Sci, Shenzhen 518055, Peoples R China
[4] Brunel Univ London, Dept Elect & Comp Engn, Kingston Lane, Uxbridge UB8 3PH, Middx, England
Funding
National Natural Science Foundation of China;
Keywords
Graph neural network; Deep reinforcement learning; Formal verification; Adversarial defense;
DOI
10.1016/j.neucom.2025.129704
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The security of Graph Neural Networks (GNNs) is crucial for ensuring the reliability and protection of the systems into which they are integrated in real-world applications. However, current approaches cannot prevent GNNs from learning high-risk information, such as malicious edges, nodes, and convolutions. In this paper, we propose a secure GNN learning framework called the Reinforcement Learning-based Secure Training Algorithm. We first introduce a model conversion technique that transforms the GNN training process into a verifiable Markov Decision Process (MDP) model. To maintain model security, we employ a Deep Q-Learning algorithm to block high-risk information messages. Additionally, to verify whether the strategy derived from the Deep Q-Learning algorithm meets safety requirements, we design a model transformation algorithm that converts MDPs into probabilistic verification models, allowing our method's security to be checked with formal verification tools. The effectiveness and feasibility of the proposed method are demonstrated by a 6.4% improvement in average accuracy on open-source datasets under adversarial graph attacks.
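As a rough illustration of the Deep Q-Learning step described above, the sketch below shows how a Q-network could gate high-risk edges during message passing. It is a minimal sketch under assumed conventions: the class name EdgeFilterDQN, the per-edge state encoding, the two-action {drop, keep} space, and the epsilon-greedy selection are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed PyTorch; not the authors' released code): a small
# Deep Q-Network that decides, per candidate edge, whether the message it
# carries is kept or dropped during GNN training. Action 0 = drop, 1 = keep.
import torch
import torch.nn as nn

class EdgeFilterDQN(nn.Module):
    """Q-network over per-edge state vectors (illustrative name/encoding)."""
    def __init__(self, state_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # Q-values for the actions {drop, keep}
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def filter_edges(dqn: EdgeFilterDQN, edge_states: torch.Tensor,
                 epsilon: float = 0.1) -> torch.Tensor:
    """Epsilon-greedy keep/drop mask over edges; True means keep the edge."""
    with torch.no_grad():
        q = dqn(edge_states)                        # shape: (num_edges, 2)
    greedy = q.argmax(dim=1).bool()                 # exploit learned policy
    explore = torch.rand(edge_states.size(0)) < epsilon
    random_act = torch.randint(0, 2, (edge_states.size(0),)).bool()
    return torch.where(explore, random_act, greedy) # explore with prob. eps
```

The keep/drop decisions induce a policy over an MDP whose states track which edges have been admitted; per the abstract, such an MDP can then be converted into a probabilistic verification model and checked with formal verification tools (a probabilistic model checker such as PRISM, for instance, handles MDPs and PCTL-style safety properties).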
Pages: 14