Byzantine-Robust Decentralized Federated Learning

Cited by: 5
Authors
Fang, Minghong [1 ]
Zhang, Zifan [2 ]
Hairi [3 ]
Khanduri, Prashant [4 ]
Liu, Jia [5 ]
Lu, Songtao [6 ]
Liu, Yuchen [2 ]
Gong, Neil [7 ]
Affiliations
[1] Univ Louisville, Louisville, KY 40292 USA
[2] North Carolina State Univ, Raleigh, NC USA
[3] Univ Wisconsin Whitewater, Whitewater, WI USA
[4] Wayne State Univ, Detroit, MI USA
[5] Ohio State Univ, Columbus, OH USA
[6] IBM Thomas J Watson Res Ctr, Yorktown Hts, NY USA
[7] Duke Univ, Durham, NC USA
Source
PROCEEDINGS OF THE 2024 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2024 | 2024
Keywords
Decentralized Federated Learning; Poisoning Attacks; Byzantine Robustness;
DOI
10.1145/3658644.3670307
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning (FL) enables multiple clients to collaboratively train machine learning models without revealing their private training data. Conventional FL follows a server-assisted architecture (server-assisted FL), in which a central server coordinates the training process. However, server-assisted FL suffers from poor scalability, due to a communication bottleneck at the server, as well as trust-dependency issues. To address these challenges, the decentralized federated learning (DFL) architecture has been proposed, allowing clients to train models collaboratively in a serverless, peer-to-peer manner. However, owing to its fully decentralized nature, DFL is highly vulnerable to poisoning attacks, in which malicious clients manipulate the system by sending carefully crafted local models to their neighboring clients. To date, only a limited number of Byzantine-robust DFL methods have been proposed, and most are either communication-inefficient or remain vulnerable to advanced poisoning attacks. In this paper, we propose a new algorithm, BALANCE (Byzantine-robust averaging through local similarity in decentralization), to defend against poisoning attacks in DFL. In BALANCE, each client uses its own local model as a similarity reference to determine whether a received model is malicious or benign. We establish theoretical convergence guarantees for BALANCE under poisoning attacks in both strongly convex and non-convex settings. Moreover, the convergence rate of BALANCE under poisoning attacks matches that of state-of-the-art counterparts in Byzantine-free settings. Extensive experiments further demonstrate that BALANCE outperforms existing DFL methods and effectively defends against poisoning attacks.
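The similarity-based acceptance idea described in the abstract — each client comparing a received model against its own local model before aggregating — can be sketched as follows. This is a minimal illustration under assumptions, not the paper's exact algorithm: the decaying-threshold schedule and the parameters `gamma`, `kappa`, and the mixing weight `alpha` are hypothetical choices for the sketch.

```python
import numpy as np

def balance_filter(local_model, neighbor_models, gamma=0.3, kappa=1.0, t=0, T=100):
    """Accept a neighbor's model only if it lies close to the client's own
    local model. The threshold decays over training rounds; gamma, kappa,
    and the schedule here are illustrative assumptions."""
    threshold = gamma * np.exp(-kappa * t / T) * np.linalg.norm(local_model)
    return [m for m in neighbor_models
            if np.linalg.norm(m - local_model) <= threshold]

def balance_aggregate(local_model, neighbor_models, alpha=0.5, **filter_kwargs):
    """Mix the local model with the mean of the accepted neighbor models.
    alpha is an assumed mixing weight."""
    accepted = balance_filter(local_model, neighbor_models, **filter_kwargs)
    if not accepted:
        return local_model  # no neighbor passed the similarity test
    return alpha * local_model + (1 - alpha) * np.mean(accepted, axis=0)
```

In this sketch, a Byzantine neighbor sending a model far from the client's own parameters is simply excluded from the average, while nearby (presumably benign) models are averaged in.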
Pages: 2874-2888
Page count: 15