Byzantine-robust Federated Learning through Collaborative Malicious Gradient Filtering

Cited by: 34
Authors
Xu, Jian [1]
Huang, Shao-Lun [1]
Song, Linqi [2]
Lan, Tian [3]
Affiliations
[1] Tsinghua Univ, Beijing, Peoples R China
[2] City Univ Hong Kong, Hong Kong, Peoples R China
[3] George Washington Univ, Washington, DC 20052 USA
Source
2022 IEEE 42ND INTERNATIONAL CONFERENCE ON DISTRIBUTED COMPUTING SYSTEMS (ICDCS 2022) | 2022
Funding
National Key R&D Program of China
Keywords
Federated Learning; Attack Detection; Distributed Learning Security;
DOI
10.1109/ICDCS54860.2022.00120
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
Gradient-based training in federated learning is known to be vulnerable to faulty/malicious clients, which are often modeled as Byzantine clients. To this end, previous work either makes use of auxiliary data at the parameter server to verify the received gradients (e.g., by computing a validation error rate) or leverages statistics-based methods (e.g., median and Krum) to identify and remove malicious gradients from Byzantine clients. In this paper, we note that auxiliary data may not always be available in practice and therefore focus on the statistics-based approach. However, recent work on model poisoning attacks has shown that well-crafted attacks can circumvent most median- and distance-based statistical defenses, making malicious gradients indistinguishable from honest ones. To tackle this challenge, we show that the element-wise sign of the gradient vector provides valuable insight for detecting model poisoning attacks. Based on our theoretical analysis of the Little is Enough attack, we propose a novel approach called SignGuard that enables Byzantine-robust federated learning through collaborative malicious gradient filtering. More precisely, the received gradients are first processed to generate relevant magnitude, sign, and similarity statistics, which are then used jointly by multiple filters to eliminate malicious gradients before final aggregation. Finally, extensive experiments on image and text classification tasks are conducted under recently proposed attacks and defense strategies. The numerical results demonstrate the effectiveness and superiority of our proposed approach.
Pages: 1223-1235
Page count: 13
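
The abstract describes a three-step pipeline: compute per-client magnitude, sign, and similarity statistics from the received gradients, run several filters over those statistics, and average only the surviving gradients. The snippet below is a minimal sketch of that idea in NumPy, not the authors' SignGuard implementation: the norm bounds, the MAD-based sign test, and the omission of the cosine-similarity filter are simplifying assumptions made here for illustration.

```python
# Illustrative sketch of statistics-based gradient filtering in the spirit
# of the abstract above; NOT the authors' SignGuard implementation. The
# norm bounds and the MAD-based sign test are ad-hoc assumptions, and the
# cosine-similarity filter is omitted for brevity.
import numpy as np

def filter_and_aggregate(grads, norm_bounds=(0.1, 3.0)):
    """Drop suspicious client gradients, then average the survivors.

    grads: list of 1-D numpy arrays, one flattened gradient per client.
    Returns (aggregated gradient, indices of clients kept).
    """
    G = np.stack(grads)                      # shape: (num_clients, dim)

    # Magnitude statistic: L2 norm of each client's gradient, compared
    # against the median norm as a robust scale reference.
    norms = np.linalg.norm(G, axis=1)
    med_norm = np.median(norms)
    lo, hi = norm_bounds
    mag_ok = (norms >= lo * med_norm) & (norms <= hi * med_norm)

    # Sign statistic: fraction of positive coordinates per client. Honest
    # clients should produce similar sign fractions, so flag clients far
    # from the median (beyond 2 MADs here, an illustrative threshold).
    pos_frac = (G > 0.0).mean(axis=1)
    dev = np.abs(pos_frac - np.median(pos_frac))
    mad = np.median(dev) + 1e-12             # guard against zero MAD
    sign_ok = dev <= 2.0 * mad

    keep = mag_ok & sign_ok
    if not keep.any():                       # degenerate case: keep all
        keep[:] = True
    return G[keep].mean(axis=0), np.flatnonzero(keep)

# Toy usage: 8 honest clients plus 2 clients sending scaled-up gradients.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 1.0, 1000) for _ in range(8)]
malicious = [10.0 * rng.normal(0.0, 1.0, 1000) for _ in range(2)]
agg, kept = filter_and_aggregate(honest + malicious)
print("clients kept:", kept)                 # the two scaled clients are dropped
```

In this toy run the scaled gradients are caught by the magnitude filter alone (scaling does not change coordinate signs); the sign statistic matters against sign-flipping-style attacks, which this sketch does not simulate.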