Fair Detection of Poisoning Attacks in Federated Learning

Cited by: 11
Authors
Singh, Ashneet Khandpur [1 ]
Blanco-Justicia, Alberto [1 ]
Domingo-Ferrer, Josep [1 ]
Sanchez, David [1 ]
Rebollo-Monedero, David [1 ]
Affiliations
[1] Univ Rovira & Virgili, Dept Comp Engn & Math, CYBERCAT Ctr Cybersecur Res Catalonia, UNESCO Chair Data Privacy, Av Paisos Catalans 26, E-43007 Tarragona, Catalonia, Spain
Source
2020 IEEE 32ND INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI) | 2020
Funding
EU Horizon 2020;
Keywords
Federated learning; Security; Privacy; Fairness;
DOI
10.1109/ICTAI50040.2020.00044
CLC Classification Number
TP18 [Artificial intelligence theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated learning is a decentralized machine learning technique that aggregates partial models trained by a set of clients on their own private data to obtain a global model. This technique is vulnerable to security attacks, such as model poisoning, whereby malicious clients submit bad updates in order to prevent the model from converging or to introduce artificial bias into the classification. Applying anti-poisoning techniques might lead to the discrimination of minority groups whose data are significantly and legitimately different from those of the majority of clients. In this work, we strive to strike a balance between fighting poisoning and accommodating diversity to help learn fairer and less discriminatory federated learning models. In this way, we forestall the exclusion of diverse clients while still ensuring detection of poisoning attacks. Empirical work on a standard machine learning data set shows that employing our approach to tell legitimate from malicious updates produces models that are more accurate than those obtained with standard poisoning detection techniques.
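The abstract does not describe the detection mechanism itself. As a purely illustrative aid, the following minimal Python sketch (not the authors' method) combines federated averaging with a naive distance-based filter on client updates; it makes the tension concrete, since an outlier filter that rejects poisoned updates can also reject updates from clients whose data are legitimately different. The function and parameter names (federated_average, tolerance) are hypothetical.

import numpy as np

def federated_average(updates, tolerance=2.0):
    # Aggregate flattened client updates, discarding those whose distance to the
    # coordinate-wise median exceeds `tolerance` times the median distance.
    updates = np.stack(updates)                        # shape: (n_clients, n_params)
    center = np.median(updates, axis=0)                # robust central update
    dists = np.linalg.norm(updates - center, axis=1)   # per-client distance to center
    accepted = dists <= tolerance * np.median(dists)   # crude outlier test
    return updates[accepted].mean(axis=0), accepted

# Toy example: a majority cluster, a small legitimate minority, and one poisoned client.
rng = np.random.default_rng(0)
majority = [rng.normal(0.0, 0.1, 10) for _ in range(8)]
minority = [rng.normal(1.0, 0.1, 10) for _ in range(2)]  # legitimate but different clients
poisoned = [np.full(10, 50.0)]                           # obviously malicious update
aggregate, accepted = federated_average(majority + minority + poisoned)
print("accepted clients:", accepted)  # note: the minority updates risk being rejected too

With this kind of threshold, the poisoned update is rejected, but so are the two minority updates, which is exactly the unfair exclusion the paper aims to avoid.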
Pages: 224 - 229
Number of pages: 6
Related Papers
50 records in total
  • [21] Data Poisoning Attacks Against Federated Learning Systems
    Tolpegin, Vale
    Truex, Stacey
    Gursoy, Mehmet Emre
    Liu, Ling
    COMPUTER SECURITY - ESORICS 2020, PT I, 2020, 12308 : 480 - 501
  • [22] Suppressing Poisoning Attacks on Federated Learning for Medical Imaging
    Alkhunaizi, Naif
    Kamzolov, Dmitry
    Takac, Martin
    Nandakumar, Karthik
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT VIII, 2022, 13438 : 673 - 683
  • [23] Poisoning Attacks against Federated Learning in Load Forecasting of Smart Energy
    Qureshi, Naik Bakht Sania
    Kim, Dong-Hoon
    Lee, Jiwoo
    Lee, Eun-Kyu
    PROCEEDINGS OF THE IEEE/IFIP NETWORK OPERATIONS AND MANAGEMENT SYMPOSIUM 2022, 2022,
  • [24] CONTRA: Defending Against Poisoning Attacks in Federated Learning
    Awan, Sana
    Luo, Bo
    Li, Fengjun
    COMPUTER SECURITY - ESORICS 2021, PT I, 2021, 12972 : 455 - 475
  • [25] Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
    Mallah, Ranwa Al
    Lopez, David
    Badu-Marfo, Godwin
    Farooq, Bilal
    IEEE ACCESS, 2023, 11 : 125064 - 125079
  • [26] Data and Model Poisoning Backdoor Attacks on Wireless Federated Learning, and the Defense Mechanisms: A Comprehensive Survey
    Wan, Yichen
    Qu, Youyang
    Ni, Wei
    Xiang, Yong
    Gao, Longxiang
    Hossain, Ekram
    IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, 2024, 26 (03): 1861 - 1897
  • [27] Data Poisoning Detection in Federated Learning
    Khuu, Denise-Phi
    Sober, Michael
    Kaaser, Dominik
    Fischer, Mathias
    Schulte, Stefan
    39TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2024, 2024, : 1549 - 1558
  • [28] Detecting and mitigating poisoning attacks in federated learning using generative adversarial networks
    Zhao, Ying
    Chen, Junjun
    Zhang, Jiale
    Wu, Di
    Blumenstein, Michael
    Yu, Shui
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2022, 34 (07)
  • [29] SPFL: A Self-Purified Federated Learning Method Against Poisoning Attacks
    Liu, Zizhen
    He, Weiyang
    Chang, Chip-Hong
    Ye, Jing
    Li, Huawei
    Li, Xiaowei
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 6604 - 6619
  • [30] ShieldFL: Mitigating Model Poisoning Attacks in Privacy-Preserving Federated Learning
    Ma, Zhuoran
    Ma, Jianfeng
    Miao, Yinbin
    Li, Yingjiu
    Deng, Robert H.
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 1639 - 1654