Gradient leakage attacks in federated learning

Cited by: 4
Authors
Gong, Haimei [1 ,2 ]
Jiang, Liangjun [1 ]
Liu, Xiaoyang [1 ]
Wang, Yuanqi [3 ]
Gastro, Omary [4 ]
Wang, Lei [1 ]
Zhang, Ke [5 ]
Guo, Zhen [1 ]
Affiliations
[1] Hainan Univ, Coll Informat & Commun Engn, Sch Cyberspace Secur, State Key Lab Marine Resource Utilisat South China, Haikou 570228, Hainan, Peoples R China
[2] Coll Informat Engn, Hainan Vocat Coll Polit Sci & Law, Haikou 571100, Hainan, Peoples R China
[3] Funky Tech Shenzhen Co Ltd, Shenzhen 518000, Peoples R China
[4] Hainan Vocat Univ Sci & Technol, Coll Informat Engn, Haikou 571126, Hainan, Peoples R China
[5] Chongqing Univ, Coll Automat, Chongqing 400044, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Security and privacy; Federated Learning; Data reconstruction attack; Gradient leakage attack; Data privacy; PRIVACY;
DOI
10.1007/s10462-023-10550-z
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Federated Learning (FL) improves the privacy of local training data by exchanging only model updates (e.g., local gradients or updated parameters). Gradients and model weights have long been presumed safe to share. Nevertheless, studies have shown that gradient leakage attacks can reconstruct input images at the pixel level, a form of deep leakage. In addition, a thorough understanding of gradient leakage attacks benefits the study of model inversion attacks. Furthermore, gradient leakage attacks can be performed covertly, without hampering training performance. It is therefore important to study gradient leakage attacks in depth. In this paper, we present a systematic literature review of gradient leakage attacks and privacy protection strategies. After careful screening, existing work on gradient leakage attacks is categorized into three groups: (i) bias attacks, (ii) optimization-based attacks, and (iii) linear equation solver attacks. We propose a privacy attack system, the single-sample reconstruction attack system (SSRAS). Furthermore, a rank analysis index (RA-I) is introduced to provide an overall estimate of the security of a neural network. In addition, we propose an improved R-GAP algorithm, which can reconstruct images regardless of whether the label can be determined. Finally, experimental results show the superiority of the proposed attack system over other state-of-the-art attack algorithms.
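The linear equation solver family of attacks mentioned in the abstract can be illustrated with a minimal sketch: for a single sample passing through a linear layer with bias under cross-entropy loss, the shared gradients alone determine the input in closed form. The toy model, sizes, and data below are illustrative assumptions in PyTorch, not the paper's SSRAS or R-GAP setup.

```python
# Toy demonstration that shared gradients can leak a client's private input.
import torch

torch.manual_seed(0)
model = torch.nn.Linear(8, 3)        # stand-in for a client model's linear layer
loss_fn = torch.nn.CrossEntropyLoss()

x_true = torch.randn(1, 8)           # private training sample (never shared)
y_true = torch.tensor([1])           # private label

# Gradients the client would transmit in federated learning.
loss = loss_fn(model(x_true), y_true)
grad_W, grad_b = torch.autograd.grad(loss, [model.weight, model.bias])

# For a linear layer, grad_W = r x^T and grad_b = r, where r = softmax(z) - y.
# Hence every nonzero row of grad_W is a scaled copy of x, and the attacker
# can solve for x directly: x = grad_W[i] / grad_b[i].
i = int(grad_b.abs().argmax())       # pick the best-conditioned row
x_rec = grad_W[i] / grad_b[i]

print((x_rec - x_true.squeeze()).abs().max().item())  # near zero: exact recovery
```

This closed-form recovery only applies to such simple layers; the optimization-based attacks surveyed in the paper instead iteratively fit a dummy input whose gradients match the shared ones.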
Pages: 1337-1374 (38 pages)
Related papers (50 listed)
  • [1] Gradient leakage attacks in federated learning
    Haimei Gong
    Liangjun Jiang
    Xiaoyang Liu
    Yuanqi Wang
    Omary Gastro
    Lei Wang
    Ke Zhang
    Zhen Guo
    Artificial Intelligence Review, 2023, 56 : 1337 - 1374
  • [2] FROM GRADIENT LEAKAGE TO ADVERSARIAL ATTACKS IN FEDERATED LEARNING
    Lim, Jia Qi
    Chan, Chee Seng
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3602 - 3606
  • [3] Gradient Leakage Attacks in Federated Learning: Research Frontiers, Taxonomy, and Future Directions
    Yang, Haomiao
    Ge, Mengyu
    Xue, Dongyun
    Xiang, Kunlan
    Li, Hongwei
    Lu, Rongxing
    IEEE NETWORK, 2024, 38 (02): : 247 - 254
  • [4] Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning
    Hu, Jiahui
    Wang, Zhibo
    Shen, Yongsheng
    Lin, Bohan
    Sun, Peng
    Pang, Xiaoyi
    Liu, Jian
    Ren, Kui
    IEEE-ACM TRANSACTIONS ON NETWORKING, 2024, 32 (02) : 1407 - 1422
  • [5] Does Differential Privacy Really Protect Federated Learning From Gradient Leakage Attacks?
    Hu, Jiahui
    Du, Jiacheng
    Wang, Zhibo
    Pang, Xiaoyi
    Zhou, Yajie
    Sun, Peng
    Ren, Kui
    IEEE TRANSACTIONS ON MOBILE COMPUTING, 2024, 23 (12) : 12635 - 12649
  • [6] Gradient-Leakage Resilient Federated Learning
    Wei, Wenqi
    Liu, Ling
    Wu, Yanzhao
    Su, Gong
    Iyengar, Arun
    2021 IEEE 41ST INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2021), 2021, : 797 - 807
  • [7] Evaluating Gradient Inversion Attacks and Defenses in Federated Learning
    Huang, Yangsibo
    Gupta, Samyak
    Song, Zhao
    Li, Kai
    Arora, Sanjeev
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 34 (NEURIPS 2021), 2021, 34
  • [8] Improved Gradient Inversion Attacks and Defenses in Federated Learning
    Geng, Jiahui
    Mou, Yongli
    Li, Qing
    Li, Feifei
    Beyan, Oya
    Decker, Stefan
    Rong, Chunming
    IEEE TRANSACTIONS ON BIG DATA, 2024, 10 (06) : 839 - 850
  • [9] DEFEAT: A decentralized federated learning against gradient attacks
    Lu, Guangxi
    Xiong, Zuobin
    Li, Ruinian
    Mohammad, Nael
    Li, Yingshu
    Li, Wei
    HIGH-CONFIDENCE COMPUTING, 2023, 3 (03):
  • [10] Learning To Invert: Simple Adaptive Attacks for Gradient Inversion in Federated Learning
    Wu, Ruihan
    Chen, Xiangyu
    Guo, Chuan
    Weinberger, Kilian Q.
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, 2023, 216 : 2293 - 2303