Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks

Citations: 0
Authors
Liu, Renyang [1 ]
Zhou, Wei [1 ]
Zhang, Jinhong [1 ]
Liu, Xiaoyuan [2 ]
Si, Peiyuan [3 ]
Li, Haoran [1 ]
Affiliations
[1] Yunnan Univ, Kunming, Yunnan, Peoples R China
[2] Univ Elect Sci & Technol China, Chengdu, Sichuan, Peoples R China
[3] Nanyang Technol Univ, Singapore, Singapore
Source
SECURITY AND PRIVACY IN COMMUNICATION NETWORKS, PT I, SECURECOMM 2023 | 2025 / Vol. 567
Funding
National Natural Science Foundation of China;
Keywords
Model Inversion Attack; Adversarial Attack; Graph Neural Network; Graph Representation Learning; Network Communication;
DOI
10.1007/978-3-031-64948-6_7
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Recently, Graph Neural Networks (GNNs), including Homogeneous Graph Neural Networks (HomoGNNs) and Heterogeneous Graph Neural Networks (HeteGNNs), have made remarkable progress in many physical scenarios, especially in communication applications. Despite this success, the privacy issues of such models have also received considerable attention. Previous studies have shown that, given a well-fitted target GNN, an attacker can reconstruct the sensitive training graph of the model via model inversion attacks, raising significant privacy concerns for AI service providers. We argue that this vulnerability stems from both the target GNN itself and prior knowledge about the shared properties of real-world graphs. Inspired by this, we propose novel model inversion attack methods on HomoGNNs and HeteGNNs, namely HomoGMI and HeteGMI. Specifically, HomoGMI and HeteGMI are gradient-descent-based optimization methods that aim to maximize the cross-entropy loss on the target GNN and the 1st- and 2nd-order proximities on the reconstructed graph. Notably, to the best of our knowledge, HeteGMI is the first attempt to perform model inversion attacks on HeteGNNs. Extensive experiments on multiple benchmarks demonstrate that the proposed method outperforms competing methods.
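The abstract describes the attack as gradient-based optimization over a reconstructed graph, balancing the target model's cross-entropy loss against proximity terms. The toy sketch below is NOT the paper's implementation: the one-layer linear "target GNN", the feature-distance surrogate for 1st-order proximity, the finite-difference gradients, and all variable names are assumptions made purely to illustrate the general loop structure of such an attack on a continuously relaxed adjacency matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 6, 4, 2                       # nodes, feature dim, classes (toy sizes)
X = rng.normal(size=(n, d))             # node features, assumed known to the attacker
W = rng.normal(size=(d, c))             # frozen weights standing in for the target GNN
y = rng.integers(0, c, size=n)          # labels the attacker is assumed to observe

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def loss(A):
    """Attacker's objective (to MINIMIZE): -cross_entropy + lambda * proximity_penalty."""
    A_sym = (A + A.T) / 2                           # keep the reconstructed graph undirected
    deg = A_sym.sum(axis=1) + 1e-8
    A_norm = A_sym / deg[:, None]                   # row-normalized propagation
    probs = softmax(A_norm @ X @ W)                 # one-layer GCN-like surrogate target
    ce = -np.log(probs[np.arange(n), y] + 1e-12).mean()
    # 1st-order-proximity surrogate: penalize edges between dissimilar nodes,
    # so minimizing this term favors high proximity among connected nodes.
    dist = np.square(X[:, None, :] - X[None, :, :]).sum(-1)
    prox = (A_sym * dist).mean()
    return -ce + 0.1 * prox                         # minimizing this maximizes CE + proximity

A = rng.uniform(0, 1, size=(n, n))                  # continuous relaxation of the adjacency
A0 = A.copy()
lr, eps = 0.1, 1e-4
for _ in range(30):                                 # crude finite-difference gradient descent
    base = loss(A)
    g = np.zeros_like(A)
    for i in range(n):
        for j in range(n):
            Ap = A.copy()
            Ap[i, j] += eps
            g[i, j] = (loss(Ap) - base) / eps
    A = np.clip(A - lr * g, 0.0, 1.0)

edges = (A + A.T) / 2 > 0.5                         # threshold back to a binary graph
```

A real attack would differentiate through the actual target GNN (e.g., via autograd) rather than using finite differences, and would add the 2nd-order proximity term the abstract mentions; this sketch only shows the shape of the optimization.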
Pages: 125-144
Number of Pages: 20
Related Papers
50 records in total
  • [31] FedHGN: A Federated Framework for Heterogeneous Graph Neural Networks
    Fu, Xinyu
    King, Irwin
    PROCEEDINGS OF THE THIRTY-SECOND INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, IJCAI 2023, 2023, : 3705 - 3713
  • [32] Exploring Multiple Hypergraphs for Heterogeneous Graph Neural Networks
    Wang, Ying
    Li, Yingji
    Wu, Yue
    Wang, Xin
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 236
  • [33] Datasets and Interfaces for Benchmarking Heterogeneous Graph Neural Networks
    Liu, Yijian
    Zhang, Hongyi
    Yang, Cheng
    Li, Ao
    Ji, Yugang
    Zhang, Luhao
    Li, Tao
    Yang, Jinyu
    Zhao, Tianyu
    Yang, Juan
    Huang, Hai
    Shi, Chuan
    PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 5346 - 5350
  • [34] Heterogeneous Graph Neural Networks for Malicious Account Detection
    Liu, Ziqi
    Chen, Chaochao
    Yang, Xinxing
    Zhou, Jun
    Li, Xiaolong
    Song, Le
    CIKM'18: PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, 2018, : 2077 - 2085
  • [35] Amalgamating Knowledge from Heterogeneous Graph Neural Networks
    Jing, Yongcheng
    Yang, Yiding
    Wang, Xinchao
    Song, Mingli
    Tao, Dacheng
    2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 15704 - 15713
  • [36] Interpretable Graph Neural Networks for Heterogeneous Tabular Data
    Alkhatib, Amr
    Bostrom, Henrik
    DISCOVERY SCIENCE, DS 2024, PT I, 2025, 15243 : 310 - 324
  • [37] Artist Similarity Based on Heterogeneous Graph Neural Networks
    da Silva, Angelo Cesar Mendes
    Silva, Diego Furtado
    Marcacini, Ricardo Marcondes
    IEEE-ACM TRANSACTIONS ON AUDIO SPEECH AND LANGUAGE PROCESSING, 2024, 32 : 3717 - 3729
  • [38] Single-node attacks for fooling graph neural networks
    Finkelshtein, Ben
    Baskin, Chaim
    Zheltonozhskii, Evgenii
    Alon, Uri
    NEUROCOMPUTING, 2022, 513 : 1 - 12
  • [39] UnboundAttack: Generating Unbounded Adversarial Attacks to Graph Neural Networks
    Ennadir, Sofiane
    Alkhatib, Amr
    Nikolentzos, Giannis
    Vazirgiannis, Michalis
    Bostrom, Henrik
    COMPLEX NETWORKS & THEIR APPLICATIONS XII, VOL 1, COMPLEX NETWORKS 2023, 2024, 1141 : 100 - 111
  • [40] Backdoor Attacks on Graph Neural Networks Trained with Data Augmentation
    Yashiki, Shingo
    Takahashi, Chako
    Suzuki, Koutarou
    IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2024, E107A (03) : 355 - 358