Locally differentially private graph learning on decentralized social graph

Cited by: 0
Authors
Zhang, Guanhong [1 ,2 ]
Cheng, Xiang [1 ,2 ]
Pan, Jiaan [1 ,2 ]
Lin, Zihan [1 ]
He, Zhaofeng [1 ]
Affiliations
[1] Beijing Univ Posts & Telecommun, 10 Xitucheng Rd, Beijing 100876, Peoples R China
[2] State Key Lab Networking & Switching Technol, 10 Xitucheng Rd, Beijing 100876, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Social network; Privacy protection; Graph neural network; Local differential privacy;
DOI
10.1016/j.knosys.2024.112488
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In recent years, decentralized social networks have gained increasing attention, where each client maintains a local view of a social graph. To provide services based on graph learning in such networks, the server commonly needs to collect the local views of the graph structure, which raises privacy issues. In this paper, we focus on learning graph neural networks (GNNs) on decentralized social graphs while satisfying local differential privacy (LDP). Most existing methods collect high-dimensional local views under LDP through Randomized Response, which introduces a large amount of noise and significantly decreases the usability of the collected graph structure for training GNNs. To address this problem, we present Structure Learning-based Locally Private Graph Learning (SL-LPGL). Its main idea is to first collect low-dimensional encoded structural information called cluster degree vectors to reduce the amount of LDP noise, then learn a high-dimensional graph structure from the cluster degree vectors via graph structure learning (GSL) to train GNNs. In SL-LPGL, we propose a Homophily-aware Graph StructurE Initialization (HAGEI) method to provide a low-noise initial graph structure as learning guidance for GSL. We then introduce an Estimated Average Degree Vector Enhanced Graph Structure Learning (EADEGSL) method to further mitigate the negative impact of LDP noise in GSL. We conduct experiments on four real-world graph datasets. The experimental results demonstrate that SL-LPGL outperforms the baselines.
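The abstract notes that collecting high-dimensional local views (each client's adjacency bit vector) under LDP via Randomized Response injects a large amount of noise. As an illustration of why, the following is a minimal sketch of the standard Randomized Response mechanism for binary adjacency bits, together with a debiasing step for degree estimation; the function names and the debiasing helper are illustrative assumptions, not code from the paper.

```python
import math
import random


def randomized_response(bits, eps, rng=random.Random(0)):
    """Perturb each adjacency bit under eps-LDP:
    keep the bit with probability e^eps / (1 + e^eps), flip it otherwise.
    (Illustrative sketch; not the paper's implementation.)"""
    p_keep = math.exp(eps) / (1.0 + math.exp(eps))
    return [b if rng.random() < p_keep else 1 - b for b in bits]


def unbiased_degree(noisy_bits, eps):
    """Debias the degree estimate from perturbed bits.

    With true degree d and n bits, E[sum of noisy bits] = d*p + (n-d)*(1-p),
    where p = e^eps / (1 + e^eps); solving for d gives the estimator below.
    """
    p = math.exp(eps) / (1.0 + math.exp(eps))
    n = len(noisy_bits)
    return (sum(noisy_bits) - n * (1.0 - p)) / (2.0 * p - 1.0)
```

The estimator's variance grows with the vector length n, which is why perturbing a full n-dimensional adjacency vector per client is so noisy; this motivates the paper's move to low-dimensional cluster degree vectors.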
Pages: 14