Reinforcement learning-based secure training for adversarial defense in graph neural networks

Cited by: 1
Authors
An, Dongdong [1 ]
Yang, Yi [1 ]
Gao, Xin [1 ]
Qi, Hongda [1 ]
Yang, Yang [2 ]
Ye, Xin [3 ]
Li, Maozhen [4 ]
Zhao, Qin [1 ]
Affiliations
[1] Shanghai Normal Univ, Shanghai Engn Res Ctr Intelligent Educ & Big data, 100 Guilin Rd, Shanghai 200234, Peoples R China
[2] East China Normal Univ, Natl Trusted Embedded Software Engn Technol Res Ct, Shanghai 200062, Peoples R China
[3] Harbin Inst Technol Shenzhen, Sch Sci, Shenzhen 518055, Peoples R China
[4] Brunel Univ London, Dept Elect & Comp Engn, Kingston Lane, Uxbridge UB8 3PH, Middx, England
Funding
National Natural Science Foundation of China;
Keywords
Graph neural network; Deep reinforcement learning; Formal verification; Adversarial defense;
DOI
10.1016/j.neucom.2025.129704
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The security of Graph Neural Networks (GNNs) is crucial for ensuring the reliability and protection of the real-world systems into which they are integrated. However, current approaches cannot prevent GNNs from learning high-risk information, including high-risk edges, nodes, and convolution operations. In this paper, we propose a secure GNN learning framework called the Reinforcement Learning-based Secure Training Algorithm. We first introduce a model conversion technique that transforms the training process of GNNs into a verifiable Markov Decision Process (MDP) model. To maintain model security, we employ a Deep Q-Learning algorithm to block high-risk messages. Additionally, to verify whether the strategy derived from the Deep Q-Learning algorithm meets safety requirements, we design a model transformation algorithm that converts MDPs into probabilistic verification models, thereby ensuring our method's security through formal verification tools. The effectiveness and feasibility of the proposed method are demonstrated by a 6.4% improvement in average accuracy on open-source datasets under adversarial attack graphs.
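The record contains no implementation details; the sketch below is only a minimal illustration of the general idea in the abstract, namely treating per-edge keep/drop decisions as a Markov Decision Process and learning a filtering policy with Q-learning. The tabular Q-table, similarity-bucket states, and similarity-based proxy reward are assumptions made for this sketch; the paper itself uses a deep Q-network over a verifiable MDP of the full training process and formally verifies the resulting policy, none of which is reproduced here.

```python
# Illustrative sketch only: a tabular Q-learning filter that decides whether to
# keep or drop candidate edges seen during GNN training. States, actions, and
# rewards below are assumptions for demonstration, not the paper's formulation.
import numpy as np

rng = np.random.default_rng(0)

# Toy node features: 20 nodes, 8-dimensional features.
X = rng.normal(size=(20, 8))

def similarity_bucket(u, v, n_buckets=5):
    """Discretize the cosine similarity of two nodes' features into a state id."""
    cos = X[u] @ X[v] / (np.linalg.norm(X[u]) * np.linalg.norm(X[v]) + 1e-9)
    return int(np.clip((cos + 1) / 2 * n_buckets, 0, n_buckets - 1))

n_states, n_actions = 5, 2            # actions: 0 = drop the edge, 1 = keep it
Q = np.zeros((n_states, n_actions))
alpha, eps = 0.1, 0.2                 # learning rate and exploration rate

# Candidate edges (random node pairs); in the paper's setting these would be
# messages/edges proposed during training, possibly injected by an attacker.
edges = [tuple(rng.choice(20, size=2, replace=False)) for _ in range(200)]

for episode in range(50):
    for (u, v) in edges:
        s = similarity_bucket(u, v)
        a = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[s].argmax())
        # Proxy reward (an assumption of this sketch): keeping edges between
        # similar nodes is rewarded, keeping dissimilar ones is penalized,
        # and dropping receives the opposite sign.
        base = 1.0 if s >= 3 else -1.0
        r = base if a == 1 else -base
        # One-step Q-learning update; each edge decision is treated as terminal.
        Q[s, a] += alpha * (r - Q[s, a])

policy = Q.argmax(axis=1)
print("Learned keep/drop action per similarity bucket:", policy)
```

In the pipeline described by the abstract, the learned policy would additionally be converted into a probabilistic verification model and checked against safety requirements with formal verification tools; that step is omitted from this sketch.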
Pages: 14
References (42 in total)
[31] Wu, Tao; Luo, Junhui; Qiao, Shaojie; Wang, Chao; Yuan, Lin; Pu, Xiao; Xian, Xingping. Multiview-Ensemble-Learning-Based Robust Graph Convolutional Networks Against Adversarial Attacks. IEEE INTERNET OF THINGS JOURNAL, 2024, 11(16): 27700-27714.
[32] Wu, Tao; Yang, Nan; Chen, Long; Xiao, Xiaokui; Xian, Xingping; Liu, Jun; Qiao, Shaojie; Cui, Canyixing. ERGCN: Data enhancement-based robust graph convolutional network against adversarial attacks. INFORMATION SCIENCES, 2022, 617: 234-253.
[33] Xian, Xingping; Wu, Tao; Qiao, Shaojie; Wang, Wei; Wang, Chao; Liu, Yanbing; Xu, Guangxia. DeepEC: Adversarial attacks against graph structure prediction models. NEUROCOMPUTING, 2021, 437: 168-185.
[34] Xu, K. D. PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019: 3961.
[35] Zachary, W. W. An Information Flow Model for Conflict and Fission in Small Groups. JOURNAL OF ANTHROPOLOGICAL RESEARCH, 1977, 33(04): 452-473.
[36] Zhang, Ao; Ma, Jinwen. DefenseVGAE: Defending Against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder. ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT IV, ICIC 2024, 2024, 14865: 313-324.
[37] Zhao, Hengjun; Zeng, Xia; Chen, Taolue; Liu, Zhiming; Woodcock, Jim. Learning safe neural network controllers with barrier certificates. FORMAL ASPECTS OF COMPUTING, 2021, 33(03): 437-455.
[38] Zhao, K. Joint learning of E-commerce search and recommendation with a unified graph neural network. 2022: 1461.
[39] Zhao, Q. Proceedings of the 37th International Conference on Neural Information Processing Systems, 2023: 3338.
[40] Zhao, Qin; Huang, Jingyi; Liu, Gang; Miao, Yaru; Wang, Pengwei. A Multiinterest and Social Interest-Field Framework for Financial Security. IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2024, 11(02): 1685-1695.