Data Quality Detection Mechanism Against Label Flipping Attacks in Federated Learning

Cited by: 37
Authors
Jiang, Yifeng [1 ]
Zhang, Weiwen [1 ]
Chen, Yanxi [1 ]
Affiliations
[1] Guangdong Univ Technol, Sch Comp Sci & Technol, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Data models; Training; Computational modeling; Servers; Data privacy; Training data; Data integrity; Federated learning; deep learning; label flipping; data poisoning;
DOI
10.1109/TIFS.2023.3249568
Chinese Library Classification
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Federated learning (FL) is an emerging framework that enables a massive number of clients (e.g., mobile devices or enterprises) to collaboratively construct a global model without sharing their local data. However, because the server has no direct access to clients' data, the global model is vulnerable to attacks by malicious clients using poisoned data. Many strategies have been proposed to mitigate the threat of label flipping attacks, but they either incur considerable computational overhead or lack robustness, and some even raise privacy concerns. In this paper, we propose Malicious Clients Detection Federated Learning (MCDFL) to defend against the label flipping attack. It identifies malicious clients by recovering a distribution over a latent feature space to assess the data quality of each client. We demonstrate the effectiveness of our proposed strategy on two benchmark datasets, i.e., CIFAR-10 and Fashion-MNIST, considering different neural network models and different attack scenarios. The results show that our solution robustly detects malicious clients without excessive cost under various conditions, where the proportion of malicious clients ranges from 5% to 40%.
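The detection idea sketched in the abstract, which is to judge each client's data quality from statistics over a latent feature space, can be illustrated with a toy example. The following is not the paper's actual MCDFL algorithm: the Gaussian features, the median reference point, and the mean-plus-one-standard-deviation threshold are all assumptions made purely for illustration.

```python
# Hypothetical sketch (NOT the paper's MCDFL algorithm): flag clients whose
# latent-feature statistics deviate strongly from a robust reference.
import numpy as np

rng = np.random.default_rng(0)

def latent_mean(features):
    """Per-client summary: mean of the client's latent feature vectors."""
    return features.mean(axis=0)

def quality_scores(client_means, ref_mean):
    """Distance of each client's latent mean from the reference; larger = more suspect."""
    return np.linalg.norm(client_means - ref_mean, axis=1)

# Simulate 8 benign clients and 2 clients whose flipped labels shift the
# latent statistics of their training data (assumed shift for illustration).
benign = [rng.normal(0.0, 1.0, size=(100, 16)) for _ in range(8)]
poisoned = [rng.normal(3.0, 1.0, size=(100, 16)) for _ in range(2)]
means = np.stack([latent_mean(f) for f in benign + poisoned])

ref = np.median(means, axis=0)  # coordinate-wise median as a robust reference
scores = quality_scores(means, ref)
flagged = np.where(scores > scores.mean() + scores.std())[0]
print(flagged)  # the two poisoned clients (indices 8 and 9) stand out
```

The median reference is used here because it stays close to the benign majority as long as poisoned clients are a minority, mirroring the abstract's operating range of 5% to 40% malicious clients.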
Pages: 1625-1637
Page count: 13