Confident Federated Learning to Tackle Label Flipped Data Poisoning Attacks

Cited: 0
Authors
Ovi, Pretom Roy [1 ]
Gangopadhyay, Aryya [1 ]
Erbacher, Robert F. [2 ]
Busart, Carl [2 ]
Affiliations
[1] Univ Maryland, Ctr Realtime Distributed Sensing & Auton, Baltimore, MD 21201 USA
[2] US DEVCOM Army Res Lab, Adelphi, MD USA
Source
ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS V | 2023, Vol. 12538
Keywords
Federated Learning; Data Poisoning Attacks; Adversarial Attacks; Label Flipping Attacks;
DOI
10.1117/12.2663911
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Federated Learning (FL) enables collaborative model building among a large number of participants without revealing sensitive data to the central server. However, because of its distributed nature, FL has limited control over local data and the corresponding training process, which makes it susceptible to data poisoning attacks in which malicious workers train the model on corrupted data. In particular, attackers on the worker side can easily mount such attacks by swapping the labels of local training instances. Workers under attack then send incorrect updates to the server, poison the global model, and cause misclassifications. Detecting and excluding poisoned training samples from local training is therefore crucial in federated training. To address this, we propose a federated learning framework, Confident Federated Learning, to prevent data poisoning attacks on local workers. We first validate the label quality of local training samples by characterizing and identifying label errors in the training data, and then exclude the detected mislabeled samples from local training. We evaluate the proposed approach on the MNIST, Fashion-MNIST, and CIFAR-10 datasets; the experimental results validate the robustness of the framework against data poisoning attacks, with mislabeled samples detected at above 85% accuracy.
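The record does not spell out the label-validation step, but a minimal per-worker sketch in the spirit of confident learning is given below. The function name filter_mislabeled, the scikit-learn classifier, and the per-class thresholding rule are illustrative assumptions rather than the authors' exact method: each worker estimates out-of-sample class probabilities for its local samples, derives a per-class confidence threshold, and drops samples whose given label is not confidently supported before local training.

```python
# Minimal sketch of per-worker label filtering in the spirit of confident
# learning. Names (filter_mislabeled), the scikit-learn classifier, and the
# thresholding rule are illustrative assumptions, not the paper's exact code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def filter_mislabeled(X, y, n_classes, clf=None, cv=3):
    """Return indices of local samples whose given labels look trustworthy."""
    y = np.asarray(y)
    clf = clf if clf is not None else LogisticRegression(max_iter=1000)

    # Out-of-sample predicted probabilities for every local training sample.
    probs = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")

    # Per-class threshold: mean confidence assigned to class c on samples
    # that are labeled c (the class's average self-confidence).
    thresholds = np.array([probs[y == c, c].mean() for c in range(n_classes)])

    keep = []
    for i, given in enumerate(y):
        # Classes whose predicted probability clears their own threshold.
        confident = np.flatnonzero(probs[i] >= thresholds)
        # Keep the sample unless another class is confidently predicted
        # while the given label is not.
        if given in confident or confident.size == 0:
            keep.append(i)
    return np.array(keep, dtype=int)

# A worker would train only on X[keep], y[keep] before sending its model
# update to the federated server.
```

In a full federated round, each worker would apply this filter once to its local shard and then run standard local training (e.g., FedAvg-style updates) on the retained samples only.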
Pages: 10