A PATE-based Approach for Training Graph Neural Networks under Label Differential Privacy

Cited: 0
Authors
Huang, Heyuan [1 ]
Luo, Liwei [1 ]
Zhang, Bingbing [1 ]
Xie, Yankai [1 ]
Zhang, Chi [1 ]
Liu, Jianqing [2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Cyberspace Sci & Technol, Hefei, Anhui, Peoples R China
[2] North Carolina State Univ, Dept Comp Sci, Raleigh, NC USA
Source
IEEE CONFERENCE ON GLOBAL COMMUNICATIONS, GLOBECOM | 2023
Keywords
Graph Neural Networks; Differential Privacy; Label Differential Privacy;
DOI
10.1109/GLOBECOM54140.2023.10437079
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline Codes
0808; 0809
Abstract
As a standard solution to the problem of private deep learning, differential privacy (DP) is widely used in graph neural networks (GNNs) to protect sensitive information about the input graph data. However, most existing DP algorithms for GNNs protect the privacy of every attribute of each node. This requires injecting a large amount of noise, making these methods significantly underperform their non-private counterparts. We argue that in some practical scenarios, node labels are the only or the most sensitive attribute, and label differential privacy, a more fine-grained notion of differential privacy that protects only the labels, is more appropriate. To better capture these scenarios and improve the trade-off between data privacy and model accuracy, we propose a novel method for training GNNs under label differential privacy. Instead of naively adding noise to the node labels before training the GNN, our method follows the strategy of Private Aggregation of Teacher Ensembles (PATE) to generate differentially private node labels with both high accuracy and a strong privacy guarantee. We also propose a label denoising module that exploits the graph structure to further improve the accuracy of the trained model. Additionally, our method is model-agnostic, making it applicable to any GNN architecture. We evaluate its performance on two commonly used benchmark datasets and demonstrate that it learns high-performance models while ensuring privacy.
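The PATE strategy the abstract refers to can be illustrated with a noisy-argmax aggregation over an ensemble of teacher predictions. The sketch below is not the paper's exact mechanism; the function name, the choice of Laplace noise, and the per-query budget `epsilon` are assumptions made for illustration, following the generic PATE noisy-max aggregator.

```python
import numpy as np

def pate_noisy_labels(teacher_votes, num_classes, epsilon, rng=None):
    """Generate differentially private labels via PATE-style noisy argmax.

    Illustrative sketch, not the paper's exact algorithm.

    teacher_votes : array of shape (num_nodes, num_teachers); each entry is
                    one teacher's predicted class for that node.
    epsilon       : per-query privacy budget (assumed Laplace mechanism,
                    scale 2/epsilon on each class's vote count).
    Returns one noisy label per node.
    """
    rng = np.random.default_rng(rng)
    labels = []
    for votes in teacher_votes:
        # Histogram of teacher votes over the classes.
        counts = np.bincount(votes, minlength=num_classes).astype(float)
        # Add Laplace noise to each count, then take the noisy maximum.
        counts += rng.laplace(scale=2.0 / epsilon, size=num_classes)
        labels.append(int(np.argmax(counts)))
    return np.array(labels)
```

A (hypothetical) usage: a student GNN is then trained only on these noisy labels, so its training never touches the true labels directly; the paper's additional denoising module would smooth the noisy labels using the graph structure before student training.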
Pages: 3427-3432
Page count: 6