Byzantine-Robust Decentralized Federated Learning

Cited by: 5
Authors
Fang, Minghong [1 ]
Zhang, Zifan [2 ]
Hairi [3 ]
Khanduri, Prashant [4 ]
Liu, Jia [5 ]
Lu, Songtao [6 ]
Liu, Yuchen [2 ]
Gong, Neil [7 ]
Affiliations
[1] Univ Louisville, Louisville, KY 40292 USA
[2] North Carolina State Univ, Raleigh, NC USA
[3] Univ Wisconsin Whitewater, Whitewater, WI USA
[4] Wayne State Univ, Detroit, MI USA
[5] Ohio State Univ, Columbus, OH USA
[6] IBM Thomas J Watson Res Ctr, Yorktown Hts, NY USA
[7] Duke Univ, Durham, NC USA
Source
PROCEEDINGS OF THE 2024 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2024 | 2024
Keywords
Decentralized Federated Learning; Poisoning Attacks; Byzantine Robustness;
DOI
10.1145/3658644.3670307
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Federated learning (FL) enables multiple clients to collaboratively train machine learning models without revealing their private training data. Conventional FL follows a server-assisted architecture (server-assisted FL), in which a central server coordinates the training process. However, server-assisted FL suffers from poor scalability, due to a communication bottleneck at the server, and from trust-dependency issues. To address these challenges, the decentralized federated learning (DFL) architecture has been proposed, allowing clients to train models collaboratively in a serverless, peer-to-peer manner. However, due to its fully decentralized nature, DFL is highly vulnerable to poisoning attacks, in which malicious clients can manipulate the system by sending carefully crafted local models to their neighboring clients. To date, only a limited number of Byzantine-robust DFL methods have been proposed, and most are either communication-inefficient or remain vulnerable to advanced poisoning attacks. In this paper, we propose a new algorithm called BALANCE (Byzantine-robust averaging through local similarity in decentralization) to defend against poisoning attacks in DFL. In BALANCE, each client leverages its own local model as a similarity reference to determine whether a received model is malicious or benign. We establish theoretical convergence guarantees for BALANCE under poisoning attacks in both strongly convex and non-convex settings. Furthermore, the convergence rate of BALANCE under poisoning attacks matches that of state-of-the-art counterparts in Byzantine-free settings. Extensive experiments also demonstrate that BALANCE outperforms existing DFL methods and effectively defends against poisoning attacks.
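The abstract's core idea — each client filters neighbors' models by similarity to its own local model before averaging — can be illustrated with a minimal sketch. This is not the paper's exact algorithm: the threshold schedule, the parameter names (`gamma`, `kappa`, `alpha`), and the final mixing step are illustrative assumptions chosen only to convey the filter-then-average pattern.

```python
import numpy as np

def balance_style_aggregate(local_model, received_models, t, T,
                            gamma=0.3, kappa=1.0, alpha=0.5):
    """Illustrative BALANCE-style filtering (not the paper's exact rule).

    A neighbor's model is accepted only if its distance to the client's
    own local model falls under a threshold that shrinks over training
    rounds; accepted models are then averaged with the local model.
    """
    # Acceptance threshold: proportional to the local model's norm and
    # decaying with the round index t (an assumed exponential schedule).
    threshold = gamma * np.exp(-kappa * t / T) * np.linalg.norm(local_model)

    accepted = [m for m in received_models
                if np.linalg.norm(local_model - m) <= threshold]

    # If every received model looks suspicious, fall back to the local model.
    if not accepted:
        return local_model

    # Mix the local model with the mean of the accepted neighbor models.
    neighbor_avg = np.mean(accepted, axis=0)
    return alpha * local_model + (1 - alpha) * neighbor_avg
```

In this sketch, a model poisoned to be far from the client's own parameters is simply excluded from the average, which is how similarity-based filtering limits the influence of Byzantine neighbors without any central coordinator.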
Pages: 2874-2888
Page count: 15