Fair Detection of Poisoning Attacks in Federated Learning

Cited by: 11
Authors
Singh, Ashneet Khandpur [1 ]
Blanco-Justicia, Alberto [1 ]
Domingo-Ferrer, Josep [1 ]
Sanchez, David [1 ]
Rebollo-Monedero, David [1 ]
Affiliations
[1] Univ Rovira & Virgili, Dept Comp Engn & Math, CYBERCAT Ctr Cybersecur Res Catalonia, UNESCO Chair Data Privacy, Av Paisos Catalans 26, E-43007 Tarragona, Catalonia, Spain
Source
2020 IEEE 32ND INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI) | 2020
Funding
European Union Horizon 2020;
Keywords
Federated learning; Security; Privacy; Fairness;
DOI
10.1109/ICTAI50040.2020.00044
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Federated learning is a decentralized machine learning technique that aggregates partial models trained by a set of clients on their own private data to obtain a global model. This technique is vulnerable to security attacks, such as model poisoning, whereby malicious clients submit bad updates in order to prevent the model from converging or to introduce artificial bias in the classification. Applying anti-poisoning techniques may lead to discrimination against minority groups whose data are significantly and legitimately different from those of the majority of clients. In this work, we strive to strike a balance between fighting poisoning and accommodating diversity, to help learn fairer and less discriminatory federated learning models. In this way, we forestall the exclusion of diverse clients while still ensuring detection of poisoning attacks. Empirical work on a standard machine learning data set shows that employing our approach to tell legitimate from malicious updates produces models that are more accurate than those obtained with standard poisoning-detection techniques.
Pages: 224-229
Page count: 6
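
This record does not include the authors' actual detection rule, so the following is only a minimal illustrative sketch of the setting the abstract describes: federated averaging combined with a robust outlier filter on client updates, where a hypothetical `tolerance` parameter widens the acceptance band so that legitimately diverse minority updates are less likely to be rejected along with poisoned ones. The function name and parameter are assumptions for illustration, not the paper's method.

```python
import numpy as np

def filter_and_aggregate(updates, tolerance=2.0):
    """Aggregate client updates, discarding suspected poisoned ones.

    `updates` is a list of 1-D numpy arrays (flattened model updates).
    `tolerance` is a hypothetical knob: larger values keep more diverse
    (but possibly riskier) updates; smaller values filter aggressively.
    """
    U = np.stack(updates)                       # shape: (clients, params)
    center = np.median(U, axis=0)               # robust per-parameter center
    dists = np.linalg.norm(U - center, axis=1)  # each client's deviation
    cutoff = np.median(dists) * tolerance       # wider band spares diverse clients
    keep = dists <= cutoff
    return U[keep].mean(axis=0), keep           # FedAvg over accepted updates

# Toy usage: 4 honest clients, 1 crude poisoner submitting inflated updates.
rng = np.random.default_rng(0)
honest = [rng.normal(0, 1, 10) for _ in range(4)]
poisoned = [rng.normal(0, 1, 10) * 50]
aggregate, accepted = filter_and_aggregate(honest + poisoned)
print("accepted mask:", accepted)
```

In this sketch, raising `tolerance` trades stricter poisoning detection for inclusiveness toward clients whose data legitimately differ from the majority, which is the tension the abstract says the paper addresses.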