Adversarial attacks against dynamic graph neural networks via node injection

Cited by: 3
|
Authors
Jiang, Yanan [1]
Xia, Hui [1]
Affiliations
[1] Ocean Univ China, Sch Comp Sci & Technol, Qingdao 266100, Peoples R China
Source
HIGH-CONFIDENCE COMPUTING | 2024, Vol. 4, No. 01
Funding
National Natural Science Foundation of China;
Keywords
Dynamic graph neural network; Adversarial attack; Malicious node; Vulnerability;
DOI
10.1016/j.hcc.2023.100185
Chinese Library Classification
TP [Automation technology, computer technology];
Discipline code
0812 ;
Abstract
Dynamic graph neural networks (DGNNs) have demonstrated their extraordinary value in many practical applications. Nevertheless, the vulnerability of DGNNs is a serious hidden danger, as a small disturbance added to the model can markedly reduce its performance. At the same time, current adversarial attack schemes are implemented on static graphs, and the variability of attack models prevents these schemes from transferring to dynamic graphs. In this paper, we attack DGNNs via diffused node injection and propose the first node injection attack based on structural fragility against DGNNs, named Structural Fragility-based Dynamic Graph Node Injection Attack (SFIA). SFIA first determines the target time based on the period weight. Then, it introduces a structurally fragile edge selection strategy to establish the target node set and links these nodes with the malicious node via serial injection. Finally, an optimization function is designed to generate adversarial features for the malicious nodes. Experiments on datasets from four different fields show that SFIA is significantly superior to many comparative approaches. When the graph is injected with 1% of the original total number of nodes through SFIA, the Recall and MRR of the target DGNN on link prediction decrease by 17.4% and 14.3% respectively, and the accuracy of node classification decreases by 8.7%. (c) 2023 The Author(s). Published by Elsevier B.V. on behalf of Shandong University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
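The abstract outlines three stages: pick a target time by period weight, select targets via structurally fragile edges, then serially link an injected node to them. The sketch below illustrates those stages on a toy dynamic graph. It is only a hypothetical illustration, not the paper's method: the period weight here is simply the per-snapshot edge count, and the fragility score is the endpoint-degree sum of an edge (lower assumed more fragile); the adversarial feature optimization stage is omitted.

```python
# Hypothetical sketch of a structural-fragility-driven node injection
# attack on a dynamic graph, modeled loosely on the stages named in the
# abstract. All scoring heuristics here are stand-in assumptions.
from collections import Counter

def pick_target_time(snapshots):
    """'Period weight' stand-in: choose the snapshot with the most edges."""
    return max(range(len(snapshots)), key=lambda t: len(snapshots[t]))

def fragile_target_nodes(edges, k):
    """Rank edges by endpoint-degree sum (lower = assumed more fragile)
    and collect up to k distinct endpoints as the target node set."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    targets = []
    for u, v in sorted(edges, key=lambda e: deg[e[0]] + deg[e[1]]):
        for n in (u, v):
            if n not in targets:
                targets.append(n)
            if len(targets) == k:
                return targets
    return targets

def inject_node(snapshots, budget):
    """Serially link one malicious node to the targets at the target time."""
    t = pick_target_time(snapshots)
    nodes = {n for es in snapshots for u, v in es for n in (u, v)}
    malicious = max(nodes) + 1          # fresh node id
    targets = fragile_target_nodes(snapshots[t], budget)
    attacked = [list(es) for es in snapshots]
    for n in targets:                   # serial injection: one edge at a time
        attacked[t].append((malicious, n))
    return attacked, malicious, t, targets

# Toy dynamic graph: three snapshots, each a list of edges.
snaps = [[(0, 1)], [(0, 1), (1, 2), (2, 3)], [(0, 1), (1, 2)]]
attacked, m, t, targets = inject_node(snaps, budget=2)
```

On this toy input the densest snapshot (index 1) is attacked, and the malicious node is wired to the endpoints of the lowest-degree edge there. A real attack would follow this with gradient-based optimization of the injected node's features against the target DGNN's loss.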
Pages: 9