LDP-Fed+: A robust and privacy-preserving federated learning based classification framework enabled by local differential privacy

Cited by: 2
Authors
Wang, Yufeng [1 ]
Zhang, Xu [1 ]
Ma, Jianhua [2 ]
Jin, Qun [3 ]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Commun & Informat Engn, Nanjing, Peoples R China
[2] Hosei Univ, Fac Comp & Informat Sci, Tokyo, Japan
[3] Waseda Univ, Dept Human Informat & Cognit Sci, Tokyo, Japan
Keywords
differential privacy; federated learning; model security; privacy protection; QUALITY; SECURE
DOI
10.1002/cpe.7429
Chinese Library Classification (CLC) Number
TP31 [Computer Software]
Discipline Classification Code
081202; 0835
Abstract
As a distributed learning framework, Federated Learning (FL) allows local learners/participants to collaboratively train a joint model without exposing their own local data, offering a feasible way to legally bridge data islands. However, FL faces two challenges: data privacy and model security. The former means that, when original data are used to train FL models, various methods can be used to infer the original data samples from the shared model, causing data leakage. The latter means that unreliable or malicious participants may degrade or destroy the joint FL model by uploading wrong local model parameters. Therefore, this paper proposes a novel distributed FL training framework, LDP-Fed+, which takes both differential privacy protection and model security defense into account. Specifically, a local perturbation module is first added at the local learner side, which perturbs each learner's original data through feature extraction, binary encoding and decoding, and randomized response; the local neural network model is then trained on the perturbed data, so that the resulting network parameters satisfy local differential privacy and effectively resist model inversion attacks. Second, a security defense module is added on the server side, which uses an auxiliary model and the exponential mechanism to select an appropriate number of locally perturbed parameters for aggregation, enhancing model security defense and countering membership inference attacks. The experimental results show that, compared with other federated learning models based on differential privacy, LDP-Fed+ achieves stronger robustness in terms of model security and higher model training accuracy while ensuring strict privacy protection.
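The abstract describes two building blocks at a high level: client-side perturbation via binary encoding plus randomized response, and server-side selection of uploaded parameters with the exponential mechanism, scored by an auxiliary model. The sketch below is not the authors' code; it is a minimal illustration of those two mechanisms with hypothetical function names and toy data, assuming features are pre-scaled to [0, 1] and that auxiliary-model utility scores for each client update are already available.

```python
# Illustrative sketch only (hypothetical names, not the LDP-Fed+ implementation):
# (1) client-side binary encoding + randomized response satisfying epsilon-LDP per bit,
# (2) server-side exponential-mechanism-style selection of client updates.
import numpy as np

def binary_encode(features: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """Quantize features assumed to lie in [0, 1] to n_bits and unpack to a bit matrix."""
    levels = np.clip((features * (2 ** n_bits - 1)).round(), 0, 2 ** n_bits - 1).astype(np.uint8)
    return np.unpackbits(levels[..., None], axis=-1)  # shape (..., n_bits)

def randomized_response(bits: np.ndarray, epsilon: float) -> np.ndarray:
    """Flip each bit independently; keeping a bit with probability
    p = e^eps / (e^eps + 1) gives epsilon-LDP for that bit (p / (1 - p) = e^eps)."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    flips = np.random.rand(*bits.shape) >= p_keep
    return np.where(flips, 1 - bits, bits)

def exponential_mechanism_select(utilities: np.ndarray, epsilon: float,
                                 sensitivity: float, k: int) -> np.ndarray:
    """Sample k client indices with probability proportional to
    exp(eps * u / (2 * sensitivity)); for a single draw this is the standard
    exponential mechanism, and drawing k without replacement is a simple heuristic."""
    scores = epsilon * utilities / (2.0 * sensitivity)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return np.random.choice(len(utilities), size=k, replace=False, p=probs)

# Toy usage: perturb one client's features, then pick 3 of 10 candidate updates.
client_features = np.random.rand(5, 4)                    # features scaled to [0, 1]
perturbed_bits = randomized_response(binary_encode(client_features), epsilon=2.0)
aux_model_utilities = np.random.rand(10)                   # e.g. auxiliary-model score per update
chosen = exponential_mechanism_select(aux_model_utilities, epsilon=1.0,
                                      sensitivity=1.0, k=3)
print(perturbed_bits.shape, chosen)
```

In this sketch the privacy budget is spent per encoded bit and the server-side selection is drawn from utility scores alone; how the paper splits the budget across features and rounds, and how the auxiliary model computes utilities, follow the authors' design and are not reproduced here.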
Pages: 19