FedGG: Leveraging Generative Adversarial Networks and Gradient Smoothing for Privacy Protection in Federated Learning

Cited by: 0
Authors
Lv, Jiguang [1 ]
Xu, Shuchun [1 ]
Zhan, Xiaodong [2 ]
Liu, Tao [1 ]
Man, Dapeng [1 ]
Yang, Wu [1 ]
Affiliations
[1] Harbin Engn Univ, Harbin, Heilongjiang, Peoples R China
[2] Changan Commun Technol Co Ltd, Beijing, Peoples R China
Source
EURO-PAR 2024: PARALLEL PROCESSING, PART II, EURO-PAR 2024 | 2024 / Vol. 14802
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
Federated Learning; Privacy Protection; Parallel Computing; Generative Adversarial Networks;
DOI
10.1007/978-3-031-69766-1_27
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Gradient leakage attacks allow attackers to infer private data, raising concerns about data leakage. A series of methods have been proposed to address this problem, but they suffer from two weaknesses. First, adding noise (e.g., differential privacy) to client-shared gradients reduces privacy leakage but harms model performance and still leaves room for data recovery attacks (e.g., gradient leakage attacks). Second, encrypting shared gradients (e.g., homomorphic encryption) enhances security but demands high computational cost, making it impractical for resource-constrained edge devices. This work proposes a novel federated learning method, FedGG, that leverages generative adversarial networks and gradient smoothing: it generates pseudo-data with a Wasserstein GAN (WGAN) that retains classification characteristics, while gradient smoothing suppresses gradients with high-frequency changes. To improve the diversity of the training data, data augmentation is performed with mixup. Experiments show that, compared with common defense methods, the MES-I of noise addition and gradient clipping are 0.5278 and 0.1036, respectively, while the MES-I of FedGG is 0.6422.
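Two of the client-side components the abstract describes, mixup augmentation and gradient smoothing, can be sketched as below. This is a minimal illustration, not the paper's implementation: the Beta-distributed mixing coefficient follows the standard mixup formulation, and the exponential-moving-average form of gradient smoothing (with an assumed `beta` parameter) is one common way to suppress high-frequency gradient changes; the paper's exact smoothing rule is not given in the abstract.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Standard mixup: a convex combination of two samples and their labels.

    The mixing weight lam is drawn from Beta(alpha, alpha), so the mixed
    sample lies between the two inputs, increasing training-data diversity.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

def smooth_gradients(grad, prev_smoothed, beta=0.9):
    """Exponential moving average over successive gradients (illustrative).

    Averaging damps components of the gradient that change rapidly between
    rounds, which is one way to realize the "suppress gradients with
    high-frequency changes" idea before sharing them with the server.
    """
    return beta * prev_smoothed + (1 - beta) * grad
```

For example, mixing a zero sample with a ones sample yields a point strictly between them with label weights summing to one, and smoothing a unit gradient against a zero history with `beta=0.9` passes through only 10% of the new gradient.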
Pages: 393-407 (15 pages)