Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks

Times Cited: 0
Authors
Liu, Renyang [1 ]
Zhou, Wei [1 ]
Zhang, Jinhong [1 ]
Liu, Xiaoyuan [2 ]
Si, Peiyuan [3 ]
Li, Haoran [1 ]
Affiliations
[1] Yunnan Univ, Kunming, Yunnan, Peoples R China
[2] Univ Elect Sci & Technol China, Chengdu, Sichuan, Peoples R China
[3] Nanyang Technol Univ, Singapore, Singapore
Source
SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, PT I, SECURECOMM 2023 | 2025 / Vol. 567
Funding
National Natural Science Foundation of China;
Keywords
Model Inversion Attack; Adversarial Attack; Graph Neural Network; Graph Representation Learning; Network Communication;
DOI
10.1007/978-3-031-64948-6_7
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Recently, Graph Neural Networks (GNNs), including Homogeneous Graph Neural Networks (HomoGNNs) and Heterogeneous Graph Neural Networks (HeteGNNs), have made remarkable progress in many real-world scenarios, especially in communication applications. Despite this success, the privacy of such models has drawn considerable attention. Previous studies have shown that, given a well-fitted target GNN, an attacker can reconstruct the model's sensitive training graph via model inversion attacks, raising significant privacy concerns for AI service providers. We argue that this vulnerability stems from the target GNN itself together with prior knowledge about properties shared by real-world graphs. Motivated by this, we propose novel model inversion attack methods for HomoGNNs and HeteGNNs, namely HomoGMI and HeteGMI. Specifically, HomoGMI and HeteGMI are gradient-descent-based optimization methods that aim to maximize the cross-entropy loss on the target GNN and the 1st- and 2nd-order proximities on the reconstructed graph. Notably, to the best of our knowledge, HeteGMI is the first attempt to perform model inversion attacks on HeteGNNs. Extensive experiments on multiple benchmarks demonstrate that the proposed methods outperform competing baselines.
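To make the attack concrete, below is a minimal, hypothetical PyTorch sketch of a gradient-descent graph inversion loop of the kind the abstract describes. It is not the authors' HomoGMI/HeteGMI implementation: the target-model interface target_gnn(x, a), the attacker's knowledge of node features x and labels y, the loss weights, and the particular first- and second-order proximity formalizations are all illustrative assumptions, and the sketch minimizes cross-entropy so that the reconstructed graph makes the target model reproduce the known labels.

```python
# Minimal sketch of a gradient-descent graph model inversion loop.
# ASSUMPTIONS (not from the paper): white-box access to the target model as
# target_gnn(x, a) -> class logits, attacker-known node features `x` (N x F)
# and labels `y` (N,), and these particular proximity formalizations.
import torch
import torch.nn.functional as F

def invert_graph(target_gnn, x, y, steps=200, lr=0.1, w1=0.1, w2=0.1):
    n = x.shape[0]
    # Relax the discrete adjacency matrix: optimize real-valued logits and
    # squash them into (0, 1) edge probabilities with a sigmoid.
    adj_logits = torch.zeros(n, n, requires_grad=True)
    opt = torch.optim.Adam([adj_logits], lr=lr)
    off_diag = 1.0 - torch.eye(n)                 # no self-loops

    for _ in range(steps):
        a = torch.sigmoid(adj_logits) * off_diag
        a = (a + a.t()) / 2                       # keep the graph undirected

        # Classification term: the reconstructed graph should make the
        # target GNN reproduce the known labels.
        ce = F.cross_entropy(target_gnn(x, a), y)

        # 1st-order proximity: real graphs tend to link feature-similar
        # nodes, so penalize edges between dissimilar nodes.
        dist = torch.cdist(x, x) ** 2             # pairwise squared distances
        prox1 = (a * dist).sum() / a.sum().clamp(min=1e-8)

        # 2nd-order proximity (one possible encoding): reward edges between
        # nodes that share many common neighbors, counted by A @ A.
        prox2 = -(a * (a @ a)).sum() / (a.sum() ** 2).clamp(min=1e-8)

        loss = ce + w1 * prox1 + w2 * prox2
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Threshold the relaxed adjacency into a discrete reconstructed graph.
    with torch.no_grad():
        a = torch.sigmoid(adj_logits) * off_diag
        return ((a + a.t()) / 2 > 0.5).float()
```

The relaxation-then-threshold pattern (optimize continuous edge probabilities, then binarize) is a common way to make a discrete adjacency matrix amenable to gradient descent; the paper's actual objective, sign conventions, and discretization may differ.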
Pages: 125-144
Page Count: 20