AWFC: Preventing Label Flipping Attacks Towards Federated Learning for Intelligent IoT

Cited by: 6
Authors
Lv, Zhuo [1 ]
Cao, Hongbo [2 ]
Zhang, Feng [3 ]
Ren, Yuange [2 ]
Wang, Bin [3 ]
Chen, Cen [1 ]
Li, Nuannuan [1 ]
Chang, Hao [1 ]
Wang, Wei [2 ]
Affiliations
[1] State Grid Henan Elect Power Res Inst, Zhengzhou 450052, Peoples R China
[2] Beijing Jiaotong Univ, Beijing Key Lab Secur & Privacy Intelligent Trans, 3 Shangyuancun, Beijing 100044, Peoples R China
[3] Zhejiang Key Lab Multidimens Percept Technol Appl, Hangzhou 310053, Peoples R China
Funding
National Natural Science Foundation of China; National Key Research and Development Program of China;
Keywords
federated learning; label flipping attacks; poisoning attacks; distributed machine learning; intrusion detection; AUDIT DATA STREAMS; DETECTING ANOMALIES; APPS; FLOW;
DOI
10.1093/comjnl/bxac124
CLC Number
TP3 [Computing Technology, Computer Technology];
Discipline Code
0812;
Abstract
Centralized machine learning methods require the aggregation of data collected from clients. Owing to growing awareness of data privacy, however, aggregating the raw data collected by Internet of Things (IoT) devices is infeasible in many scenarios. Federated learning (FL), a distributed learning framework, can run across multiple IoT devices. It aims to mitigate privacy leakage by training a model locally on each client rather than on a server that aggregates all the raw data. However, FL remains vulnerable to poisoning attacks. Label flipping attacks, a typical form of data poisoning in FL, poison the global model by submitting model updates trained on data with mismatched labels. The central parameter aggregation server struggles to detect label flipping attacks because, in a typical FL system, it has no access to clients' raw data. In this work, we aim to prevent label flipping poisoning attacks by observing how model parameters change when trained on different single labels. We propose a novel detection method, the Average Weight of each class in its associated Fully Connected layer (AWFC). AWFC identifies the classes present in a client's data from the weight assignments in a fully connected layer of the neural network model and applies a statistical algorithm to recognize malicious clients. We conduct extensive experiments on benchmark data, including Fashion-MNIST and the Intrusion Detection Evaluation Dataset (CIC-IDS2017). Comprehensive experimental results demonstrate that our method achieves a detection accuracy of over 90% in identifying attackers that flip labels.
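The record includes no code; the following is a minimal sketch of what a detection step in the spirit of AWFC might look like, assuming PyTorch-style fully connected weights of shape (num_classes, num_features) and substituting a robust z-score (median absolute deviation) for the paper's unspecified statistical algorithm. The function names and the threshold are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def class_average_weights(fc_weight: np.ndarray) -> np.ndarray:
    """Average weight per class in a final fully connected layer.

    `fc_weight` has shape (num_classes, num_features); the row mean gives
    one scalar per output class, forming a per-client class profile.
    """
    return fc_weight.mean(axis=1)

def flag_suspicious_clients(client_fc_weights, z_thresh=10.0):
    """Flag clients whose class profiles are outliers across all clients.

    Uses a robust z-score (deviation from the per-class median, scaled by
    the median absolute deviation) as a stand-in for the statistical
    algorithm described in the paper.
    """
    profiles = np.stack([class_average_weights(w) for w in client_fc_weights])
    median = np.median(profiles, axis=0)
    mad = np.median(np.abs(profiles - median), axis=0) + 1e-12
    robust_z = np.abs(profiles - median) / mad
    # A client is suspicious if any class's average weight is extreme.
    return [i for i, z in enumerate(robust_z) if z.max() > z_thresh]

# Toy usage: five honest clients plus one whose class-3 weights were
# trained on flipped labels and so drift away from everyone else's.
rng = np.random.default_rng(0)
honest = [rng.normal(0.0, 0.01, size=(10, 128)) for _ in range(5)]
attacker = rng.normal(0.0, 0.01, size=(10, 128))
attacker[3] -= 0.3  # flipped-label training depresses the true class's weights
print(flag_suspicious_clients(honest + [attacker]))  # expect the attacker, index 5
```

In a real FL round, each client's final-layer weights would be extracted from its submitted update after local training and compared across clients per round; the MAD-based score is used here only because it stays robust while most clients are honest.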
Pages: 2849-2859
Number of pages: 11