RAFLS: RDP-Based Adaptive Federated Learning With Shuffle Model

Cited by: 2
Authors
Wang, Shuo [1 ,2 ]
Gai, Keke [1 ,3 ]
Yu, Jing [4 ]
Zhu, Liehuang [1 ]
Wu, Hanghang [5 ,6 ,7 ]
Wei, Changzheng [5 ,6 ,7 ]
Yan, Ying [5 ,6 ,7 ]
Zhang, Hui [5 ,6 ,7 ]
Choo, Kim-Kwang Raymond [8 ]
Affiliations
[1] Beijing Inst Technol, Sch Cyberspace Sci & Technol, Beijing 100081, Peoples R China
[2] Guizhou Univ, State Key Lab Publ Big Data, Guiyang 550025, Peoples R China
[3] Beijing Inst Technol, Yangtze Delta Reg Acad, Jiaxing 314001, Peoples R China
[4] Chinese Acad Sci, Inst Informat Engn, Beijing 100045, Peoples R China
[5] Ant Grp, Blockchain Platform Div, Beijing 100026, Peoples R China
[6] Ant Grp, Blockchain Platform Div, Shanghai 200010, Peoples R China
[7] Ant Grp, Blockchain Platform Div, Hangzhou 310000, Peoples R China
[8] Univ Texas San Antonio, Dept Informat Syst & Cyber Secur, San Antonio, TX 78249 USA
Funding
National Natural Science Foundation of China;
Keywords
Adaptation models; Noise; Computational modeling; Privacy; Training; Accuracy; Servers; Federated learning; Rényi differential privacy; shuffle model; privacy amplification;
DOI
10.1109/TDSC.2024.3429503
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812 ;
Abstract
Federated Learning (FL) realizes distributed machine learning by sharing model updates rather than raw data, thereby preserving data privacy. However, an attacker may still infer a client's original local data from the shared model parameters, causing data leakage. While Differential Privacy (DP) is designed to address such leakage in FL, injecting noise during training reduces model accuracy. To minimize the negative impact of noise on model accuracy while maintaining privacy protection, in this article we propose an adaptive FL model, entitled RDP-based Adaptive Federated Learning in the Shuffle model (RAFLS). To protect the privacy of each client's dataset, we inject adaptive noise into the client's local model by leveraging the layer-wise adaptive sensitivity of the local model. Our approach shuffles all local model parameters to address the privacy budget explosion caused by high-dimensional aggregation and multiple iterations. We further propose a fine-grained model weight aggregation scheme that aggregates all local models into a global model. Our experimental evaluations demonstrate that RAFLS outperforms state-of-the-art methods in reducing the impact of noise on model accuracy while protecting data; for example, the accuracy of RAFLS is 1.54% higher than that of the baseline scheme when epsilon = 2.0 on FashionMNIST under the IID setting.
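The client-side mechanism described in the abstract can be pictured with a short sketch. The following Python snippet is a minimal illustration only, not the authors' RAFLS implementation: it assumes per-layer L2 clip bounds stand in for the "layer-wise adaptive sensitivity", Gaussian noise is scaled by each layer's clip bound, and a random permutation of the flattened update stands in for the shuffle step. All function and parameter names (clip_layer, perturb_client_update, shuffle_for_upload, noise_multiplier, layer_clip_bounds) are hypothetical.

# Minimal illustrative sketch (NOT the authors' RAFLS implementation).
# Assumptions: each layer's clip bound plays the role of its adaptive
# sensitivity; Gaussian noise is scaled per layer; a random permutation of
# the flattened update stands in for the shuffle step.
import numpy as np

def clip_layer(update, clip_bound):
    """Scale a layer update so that its L2 norm does not exceed clip_bound."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_bound / (norm + 1e-12))

def perturb_client_update(layer_updates, layer_clip_bounds, noise_multiplier, rng):
    """Per-layer clipping plus Gaussian noise on one client's local update."""
    noisy = []
    for upd, bound in zip(layer_updates, layer_clip_bounds):
        clipped = clip_layer(upd, bound)
        # Noise scale tracks the layer's own clip bound, so layers with a
        # smaller sensitivity receive proportionally less absolute noise.
        noise = rng.normal(0.0, noise_multiplier * bound, size=upd.shape)
        noisy.append(clipped + noise)
    return noisy

def shuffle_for_upload(noisy_layers, rng):
    """Flatten the noisy update and randomly permute its coordinates before upload."""
    flat = np.concatenate([layer.ravel() for layer in noisy_layers])
    perm = rng.permutation(flat.size)
    return flat[perm], perm  # the permutation allows the original order to be restored

# Example: two hypothetical layers with different magnitudes.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(4, 4)), rng.normal(size=(8,)) * 0.1]
bounds = [np.linalg.norm(l) for l in layers]   # here: each layer's own norm
noisy = perturb_client_update(layers, bounds, noise_multiplier=1.0, rng=rng)
shuffled, perm = shuffle_for_upload(noisy, rng)

In the shuffle model referenced by the paper's title and keywords, shuffling is performed between clients and the server to obtain privacy amplification; the coordinate permutation above is only a stand-in for that step.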
Pages: 1181 - 1194
Page count: 14