FedEqual: Defending Model Poisoning Attacks in Heterogeneous Federated Learning

Cited by: 8
Authors
Chen, Ling-Yuan [1 ]
Chiu, Te-Chuan [2 ]
Pang, Ai-Chun [1 ,2 ,3 ]
Cheng, Li-Chen [1 ]
Affiliations
[1] Natl Taiwan Univ, Dept Comp Sci & Informat Engn, Taipei, Taiwan
[2] Acad Sinica, Res Ctr Informat Technol Innovat, Taipei, Taiwan
[3] Natl Taiwan Univ, Grad Inst Networking & Multimedia, Taipei, Taiwan
Source
2021 IEEE GLOBAL COMMUNICATIONS CONFERENCE (GLOBECOM) | 2021
Keywords
Edge AI; Federated Learning; Model Poisoning Attacks; Model Security; System Robustness;
DOI
10.1109/GLOBECOM46510.2021.9685082
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline code
0812 ;
Abstract
With the rise of edge AI, federated learning (FL) has emerged as a privacy-preserving framework that helps meet the General Data Protection Regulation (GDPR). Unfortunately, FL is vulnerable to a recent security threat: model poisoning attacks. By replacing the global model with a targeted poisoned model, malicious end devices can plant backdoors and manipulate the entire learning process. Traditional defenses assume a homogeneous environment, where outliers can be excluded with little side effect on model performance. In privacy-preserving FL, however, each end device may hold only a few data classes and differing amounts of data, creating a substantially heterogeneous environment in which an outlier could be either malicious or benign. To preserve both the performance and robustness of the FL framework, no local model should be summarily excluded from the global model update. Therefore, in this paper, we propose a defense strategy called FedEqual that mitigates model poisoning attacks while preserving the performance of the learning task without excluding any benign models. The results show that FedEqual outperforms other state-of-the-art baselines across different heterogeneous environments, evaluated against reproduced, up-to-date model poisoning attacks.
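The model-replacement attack the abstract refers to can be sketched in a few lines. The snippet below is a minimal illustration, assuming plain FedAvg-style aggregation (uniform averaging of client updates); the `fedavg` helper, the toy dimensions, and the variable names are illustrative assumptions, not the paper's implementation, and FedEqual's defense itself is not reproduced here.

```python
import numpy as np

def fedavg(global_w, updates):
    # Plain FedAvg step (illustrative): global weights plus the
    # uniform mean of all client updates.
    return global_w + np.mean(updates, axis=0)

# Toy setting: a 4-parameter "model", n clients, one of them malicious.
n = 10
global_w = np.zeros(4)
rng = np.random.default_rng(0)
benign = [rng.normal(scale=0.01, size=4) for _ in range(n - 1)]

# Model-replacement attack: the attacker scales its update so that,
# after averaging, the new global model lands exactly on w_poison,
# cancelling the benign contributions.
w_poison = np.ones(4)
malicious = n * (w_poison - global_w) - np.sum(benign, axis=0)

new_global = fedavg(global_w, benign + [malicious])
# new_global equals w_poison: a single client replaced the global model.
```

Because the averaged update is `(sum(benign) + malicious) / n = w_poison - global_w`, the aggregation step lands exactly on the poisoned model, which is why outlier removal is the natural homogeneous-setting defense and why it becomes risky once benign heterogeneous clients also look like outliers.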
Pages: 6