DEFEAT: A decentralized federated learning against gradient attacks

Cited by: 7
Authors
Lu, Guangxi [1 ]
Xiong, Zuobin [1 ]
Li, Ruinian [2 ]
Mohammad, Nael [3 ]
Li, Yingshu [1 ]
Li, Wei [1 ]
Affiliations
[1] Georgia State Univ, Dept Comp Sci, Atlanta, GA 30303 USA
[2] Bowling Green State Univ, Dept Comp Sci, Bowling Green, OH 43403 USA
[3] Al Quds Open Univ, Comp Informat Syst Dept, Ramallah 90917, Palestine
Source
HIGH-CONFIDENCE COMPUTING | 2023, Vol. 3, No. 3
Funding
U.S. National Science Foundation
Keywords
Federated learning; Peer-to-peer network; Privacy protection
DOI
10.1016/j.hcc.2023.100128
Chinese Library Classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
As one of the most promising machine learning frameworks to emerge in recent years, federated learning (FL) has received considerable attention. The main idea of centralized FL is to train a global model by aggregating local model parameters while keeping users' private data on their own devices. However, recent studies have shown that traditional centralized federated learning is vulnerable to various attacks, such as gradient attacks, in which a malicious server collects local model gradients and uses them to recover the private data stored on the clients. In this paper, we propose a decentralized federated learning against aTtacks (DEFEAT) framework and use it to defend against gradient attacks. The decentralized structure adopted in this paper uses a peer-to-peer network to transmit, aggregate, and update local models. In DEFEAT, participating clients only need to communicate with their single-hop neighbors to learn the global model, which balances model accuracy and communication cost during training. Through a series of experiments and detailed case studies on real datasets, we evaluate DEFEAT's model performance and its ability to preserve privacy against gradient attacks. (c) 2023 The Author(s). Published by Elsevier B.V. on behalf of Shandong University. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Pages: 8
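The abstract above describes DEFEAT's mechanism only at a high level: each client trains locally and exchanges models only with its single-hop neighbors over a peer-to-peer network, so no central server ever observes per-client gradients. The minimal Python sketch below illustrates that neighbor-only (gossip-style) aggregation pattern under stated assumptions; the ring topology, least-squares task, synthetic data, learning rate, and plain neighbor averaging are illustrative choices of this sketch, not details taken from the paper.

# A minimal, self-contained sketch (not the authors' implementation) of
# decentralized federated learning in the spirit described by the abstract:
# each client trains locally, then averages parameters only with its
# single-hop neighbors on a peer-to-peer topology, so no central server
# ever collects per-client gradients. The ring topology, linear task,
# learning rate, and round counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

NUM_CLIENTS, DIM, ROUNDS, LOCAL_STEPS, LR = 8, 5, 30, 5, 0.05

# Synthetic private data per client (never shared between clients).
true_w = rng.normal(size=DIM)
data = []
for _ in range(NUM_CLIENTS):
    X = rng.normal(size=(50, DIM))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    data.append((X, y))

# Peer-to-peer topology: a ring, so each client has two single-hop neighbors.
neighbors = {i: [(i - 1) % NUM_CLIENTS, (i + 1) % NUM_CLIENTS]
             for i in range(NUM_CLIENTS)}

# Each client holds its own local model parameters.
models = [np.zeros(DIM) for _ in range(NUM_CLIENTS)]

def local_update(w, X, y):
    """Plain local SGD on a least-squares loss; only the resulting
    parameters (not raw data or per-sample gradients) leave the client."""
    for _ in range(LOCAL_STEPS):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - LR * grad
    return w

for r in range(ROUNDS):
    # 1) Local training on each client's private data.
    trained = [local_update(models[i], *data[i]) for i in range(NUM_CLIENTS)]
    # 2) Gossip aggregation: average only with single-hop neighbors.
    models = [
        np.mean([trained[i]] + [trained[j] for j in neighbors[i]], axis=0)
        for i in range(NUM_CLIENTS)
    ]

# All local models drift toward a common global solution.
print("max distance to true weights:",
      max(np.linalg.norm(w - true_w) for w in models))

Because aggregation is restricted to immediate neighbors, no single party ever holds all clients' updates at once, which is the structural property the paper leverages against gradient attacks; the paper's actual aggregation and update rules should be taken from the article itself.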
Related papers (50 in total)
  • [1] Decentralized Defense: Leveraging Blockchain against Poisoning Attacks in Federated Learning Systems. Thennakoon, Rashmi; Wanigasundara, Arosha; Weerasinghe, Sanjaya; Seneviratne, Chatura; Siriwardhana, Yushan; Liyanage, Madhusanka. 2024 IEEE 21st Consumer Communications & Networking Conference (CCNC), 2024: 950-955.
  • [2] Enhancing Privacy of Spatiotemporal Federated Learning Against Gradient Inversion Attacks. Zheng, Lele; Cao, Tang; Jiang, Renhe; Taura, Kenjiro; Shen, Yulong; Li, Sheng; Yoshikawa, Masatoshi. Database Systems for Advanced Applications (DASFAA 2024), Part I, 2024, 14850: 457-473.
  • [3] Gradient leakage attacks in federated learning. Gong, Haimei; Jiang, Liangjun; Liu, Xiaoyang; Wang, Yuanqi; Gastro, Omary; Wang, Lei; Zhang, Ke; Guo, Zhen. Artificial Intelligence Review, 2023, 56 (Suppl 1): 1337-1374.
  • [4] Network Gradient Descent Algorithm for Decentralized Federated Learning. Wu, Shuyuan; Huang, Danyang; Wang, Hansheng. Journal of Business & Economic Statistics, 2023, 41 (03): 806-818.
  • [5] Shield Against Gradient Leakage Attacks: Adaptive Privacy-Preserving Federated Learning. Hu, Jiahui; Wang, Zhibo; Shen, Yongsheng; Lin, Bohan; Sun, Peng; Pang, Xiaoyi; Liu, Jian; Ren, Kui. IEEE/ACM Transactions on Networking, 2024, 32 (02): 1407-1422.
  • [6] Defending against gradient inversion attacks in federated learning via statistical machine unlearning. Gao, Kun; Zhu, Tianqing; Ye, Dayong; Zhou, Wanlei. Knowledge-Based Systems, 2024, 299.
  • [7] Evaluating Gradient Inversion Attacks and Defenses in Federated Learning. Huang, Yangsibo; Gupta, Samyak; Song, Zhao; Li, Kai; Arora, Sanjeev. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
  • [8] Improved Gradient Inversion Attacks and Defenses in Federated Learning. Geng, Jiahui; Mou, Yongli; Li, Qing; Li, Feifei; Beyan, Oya; Decker, Stefan; Rong, Chunming. IEEE Transactions on Big Data, 2024, 10 (06): 839-850.
  • [9] From Gradient Leakage to Adversarial Attacks in Federated Learning. Lim, Jia Qi; Chan, Chee Seng. 2021 IEEE International Conference on Image Processing (ICIP), 2021: 3602-3606.