FL-PTD: A Privacy Preserving Defense Strategy Against Poisoning Attacks in Federated Learning

Cited by: 0
Authors
Xia, Geming [1]
Chen, Jian [1]
Huang, Xinyi [1]
Yu, Chaodong [1]
Zhang, Zhong [1]
Affiliations
[1] Natl Univ Def Technol, Coll Comp Sci & Technol, Changsha, Peoples R China
Source
2023 IEEE 47TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC | 2023
Keywords
Federated Learning; Poisoning Attacks; Privacy and Security; Machine Learning; Trust Evaluation
DOI
10.1109/COMPSAC57700.2023.00101
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Discipline Codes
081203; 0835
Abstract
Federated learning allows participants to share models (gradients) rather than raw data to collaboratively train a global model, which strengthens participants' privacy protection but makes the global model more vulnerable to poisoning attacks. Poisoning attacks not only degrade model performance but also create security risks. A mainstream defense strategy is for the server to identify malicious models by analyzing the models uploaded by participants. However, an attacker can also use these uploaded models to recover participants' private data, leading to privacy disclosure. Participants therefore need to encrypt or perturb their models before uploading them so that no party, including the server, can access the plaintext models, which poses a great challenge for defending against poisoning attacks. Moreover, many existing defense strategies against poisoning attacks only work well when the data distribution is independent and identically distributed (IID). To address these issues, we propose a novel defense strategy against poisoning attacks in federated learning, called FL-PTD, which can judge whether the global model has been subjected to a poisoning attack without accessing participants' local models. In addition, our method incorporates a trust evaluation mechanism that computes reputation scores from participants' historical behavior in order to identify malicious participants. Finally, we evaluate the proposed method on several real-world datasets. The experimental results show that our method effectively defends against poisoning attacks and accurately identifies attackers without compromising the model's performance.
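
The abstract names two mechanisms: judging whether the global model has been poisoned without inspecting any local model, and reputation scores computed from participants' historical behavior. The paper's actual algorithm is not reproduced in this record; the following is only a minimal Python sketch of how such a pair of checks could look, and every name in it (ReputationTracker, global_model_is_poisoned, the decay and tolerance parameters) is a hypothetical assumption for illustration, not FL-PTD itself.

from collections import defaultdict

class ReputationTracker:
    """Hypothetical reputation tracker: an exponential moving average over
    each participant's historical behavior (not the paper's exact scheme)."""

    def __init__(self, decay=0.9, threshold=0.3):
        self.decay = decay          # weight given to past behavior
        self.threshold = threshold  # scores below this flag a suspected attacker
        self.scores = defaultdict(lambda: 1.0)  # participants start fully trusted

    def update(self, participant_id, round_was_clean):
        # Rounds that produced a poisoned aggregate lower the scores of the
        # participants who contributed to that round.
        observation = 1.0 if round_was_clean else 0.0
        previous = self.scores[participant_id]
        self.scores[participant_id] = self.decay * previous + (1.0 - self.decay) * observation

    def suspected_attackers(self):
        return [p for p, s in self.scores.items() if s < self.threshold]

def global_model_is_poisoned(val_acc_now, val_acc_prev, tolerance=0.05):
    # Crude stand-in for the paper's detection step: look only at the global
    # model's accuracy on a held-out validation set, never at local models.
    return val_acc_now < val_acc_prev - tolerance

# Example round: the server tests the new global model, then updates reputations.
tracker = ReputationTracker()
round_clean = not global_model_is_poisoned(val_acc_now=0.78, val_acc_prev=0.90)
for pid in ["client-1", "client-2"]:
    tracker.update(pid, round_was_clean=round_clean)
print(tracker.suspected_attackers())  # [] after one bad round; repeated bad rounds drive scores below the threshold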
Pages: 735-740
Number of pages: 6