A realistic model extraction attack against graph neural networks

Cited: 2
Authors
Guan, Faqian [1 ]
Zhu, Tianqing [2 ]
Tong, Hanjin [1 ]
Zhou, Wanlei [2 ]
Affiliations
[1] China Univ Geosci Wuhan, Sch Comp Sci, Wuhan 430074, Hubei, Peoples R China
[2] City Univ Macau, Fac Data Sci, Macau 999078, Peoples R China
Keywords
Black-box; Fewer queries; Graph neural networks; Incorrect labels; Model extraction attack
DOI
10.1016/j.knosys.2024.112144
Chinese Library Classification (CLC) number
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Model extraction attacks are considered a significant avenue of vulnerability in machine learning. In a model extraction attack, the attacker repeatedly queries a victim model so as to train a surrogate model that mimics the victim model's output. Graph neural networks (GNNs), which are designed to process graph data, were previously thought to be less sensitive to such attacks. This is because, in black-box settings, attackers have only limited access to the victim model. Moreover, the number of queries any one user can make within a given time window is usually restricted, and some of the responses within this finite budget may contain errors. Yet training a useful surrogate model not only requires a substantial number of queries; incorrect node labels in the victim GNN's responses are also highly problematic. In this paper, however, we demonstrate that GNNs can be just as vulnerable to model extraction attacks as ordinary machine learning models. Our proposed extraction method addresses the issue of incorrect node labels while also significantly reducing the number of queries required to train a well-performing surrogate, making GNN extraction attacks highly practical in the real world. Specifically, our methodology incorporates an edge prediction module that introduces potential edges into the original graph data. This module links incorrectly labeled nodes with more accurately labeled ones, thereby mitigating the impact of incorrect labels. In addition, by increasing the number of possible edges, our approach enables the surrogate model to better leverage the graph's structure, enhancing the contribution of the labeled nodes and allowing the model extraction attack to be executed with fewer queries. Our experiments demonstrate a significant performance improvement over existing approaches, especially in a black-box setting. As such, this research shows that GNNs are also vulnerable to model extraction attacks in real-world scenarios.
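The edge prediction module described in the abstract can be pictured, in rough outline, as a graph-densification step performed before surrogate training. The following sketch is an illustration only, not the authors' implementation: edge prediction is approximated here by cosine similarity over raw node features, the similarity threshold, edge budget, and two-layer GCN surrogate are hypothetical choices, and victim responses are simulated with random labels.

# Minimal sketch of the idea described in the abstract, NOT the paper's implementation.
# Assumptions: node features x, a sparse set of victim-labelled nodes whose labels may be
# wrong, and a plain GCN surrogate; the "edge prediction" is approximated by feature similarity.
import torch
import torch.nn.functional as F

def add_predicted_edges(x, adj, threshold=0.9, max_new_edges=1000):
    """Add candidate edges between highly similar nodes to densify the graph."""
    sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)  # (N, N) similarity
    sim.fill_diagonal_(0.0)
    candidate = (sim > threshold) & (adj == 0)           # consider new edges only
    scores = sim[candidate]
    if scores.numel() > max_new_edges:                   # keep only the strongest candidates
        cutoff = torch.topk(scores, max_new_edges).values.min()
        candidate &= sim >= cutoff
    aug_adj = adj.clone()
    aug_adj[candidate] = 1.0
    return torch.maximum(aug_adj, aug_adj.t())           # keep the graph undirected

class GCNSurrogate(torch.nn.Module):
    """Two-layer GCN trained on the victim's (possibly noisy) label responses."""
    def __init__(self, in_dim, hidden, n_classes):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hidden)
        self.w2 = torch.nn.Linear(hidden, n_classes)

    def forward(self, x, adj):
        a_hat = adj + torch.eye(adj.size(0))              # add self-loops
        d_inv = torch.diag(a_hat.sum(1).pow(-0.5))
        a_norm = d_inv @ a_hat @ d_inv                    # symmetric normalisation
        h = F.relu(self.w1(a_norm @ x))
        return self.w2(a_norm @ h)

# Usage sketch: densify the queried subgraph, then fit the surrogate on the noisy labels.
N, D, C = 200, 32, 5
x = torch.randn(N, D)
adj = (torch.rand(N, N) < 0.02).float()
adj = torch.maximum(adj, adj.t())
victim_labels = torch.randint(0, C, (N,))                 # stand-in for victim responses
labelled = torch.rand(N) < 0.1                            # only a small query budget

aug_adj = add_predicted_edges(x, adj)
model = GCNSurrogate(D, 64, C)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    opt.zero_grad()
    loss = F.cross_entropy(model(x, aug_adj)[labelled], victim_labels[labelled])
    loss.backward()
    opt.step()

The intent mirrors the abstract's argument: the added edges let label information from the few queried nodes propagate to unlabeled or mislabeled neighbors, so the surrogate can be trained with fewer victim queries.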
Pages: 14