ADFL: A Poisoning Attack Defense Framework for Horizontal Federated Learning

Cited by: 29
Authors
Guo, Jingjing [1 ]
Li, Haiyang [1 ]
Huang, Feiran [2 ]
Liu, Zhiquan [2 ]
Peng, Yanguo [1 ]
Li, Xinghua [1 ]
Ma, Jianfeng [1 ]
Menon, Varun G. [3 ]
Igorevich, Konstantin Kostromitin [4 ]
Affiliations
[1] Xidian Univ, Xian 710126, Peoples R China
[2] Jinan Univ, Guangzhou 510632, Peoples R China
[3] SCMS Sch Engn & Technol, Ernakulam 683576, India
[4] Natl Res Univ, South Ural State Univ, Chelyabinsk 454080, Russia
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation; US National Science Foundation;
Keywords
Servers; Collaborative work; Training; Privacy; Computational modeling; Informatics; Data models; Federated learning; malicious user detection; poisoning attack; privacy protection; reliability; SECURE;
DOI
10.1109/TII.2022.3156645
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Federated learning has recently received widespread attention and is expected to promote the application of artificial intelligence technology in various fields. Privacy-preserving techniques are applied to users' local models to protect user privacy. However, these operations prevent the server from seeing each user's true model parameters, which opens a wider door for malicious users to upload poisoned parameters and drive training toward an ineffective model. To solve this problem, in this article we propose ADFL, a poisoning attack defense framework for horizontal federated learning systems. Specifically, we design a proof generation method with which each user produces a proof that lets the server verify whether the user is malicious. We also propose an aggregation rule that keeps the accuracy of the global model high. Several verification experiments were conducted, and the results show that our method detects malicious users effectively while preserving the accuracy of the global model.
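The abstract describes a detect-then-aggregate defense: the server first decides which uploaded updates are trustworthy, then aggregates only those. The sketch below illustrates that general pattern in Python/NumPy under stated assumptions; it is not the ADFL protocol. The paper's proof-based verification is replaced here by a simple distance-to-median outlier heuristic (in the spirit of robust aggregation rules such as Blanchard et al. [3]), and the names flag_suspicious and aggregate are hypothetical, not the authors' API.

# Minimal, hypothetical sketch of a detect-then-aggregate defense for
# horizontal federated learning. NOT the ADFL protocol: ADFL's proof-based
# malicious-user detection is replaced by a distance-to-median heuristic,
# and all names here are illustrative assumptions.
import numpy as np

def flag_suspicious(updates, threshold=2.0):
    """Keep updates whose distance to the coordinate-wise median is not an outlier.

    updates: (n_users, n_params) array of local model updates.
    Returns a boolean mask: True = keep, False = flagged as suspicious.
    """
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    # Flag users whose distance exceeds mean + threshold * std of all distances.
    cutoff = dists.mean() + threshold * dists.std()
    return dists <= cutoff

def aggregate(updates, mask):
    """Average only the updates that passed the check (FedAvg over survivors)."""
    kept = updates[mask]
    if len(kept) == 0:  # fall back to the full average if everything was flagged
        kept = updates
    return kept.mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = rng.normal(0.0, 0.1, size=(9, 5))  # 9 honest users near zero
    poisoned = np.full((1, 5), 10.0)            # 1 user uploads a large poisoned update
    updates = np.vstack([honest, poisoned])
    mask = flag_suspicious(updates)
    print("kept users:", np.flatnonzero(mask))
    print("global update:", aggregate(updates, mask))

Running the script flags the single large-norm update and averages the remaining nine, which is the keep-or-drop behavior a defense framework of this kind aims for; proof-based schemes such as the one the abstract describes make the same decision verifiably rather than statistically.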
Pages: 6526-6536
Page count: 11
References
27 records in total
  • [1] AbdulRahman S, Tout H, Ould-Slimane H, Mourad A, Talhi C, Guizani M. A Survey on Federated Learning: The Journey From Centralized to Distributed On-Site Learning and Beyond. IEEE Internet of Things Journal, 2021, 8(7): 5476-5497
  • [2] Bhagoji A N, 2019, Proceedings of Machine Learning Research, vol. 97
  • [3] Blanchard P, 2017, Advances in Neural Information Processing Systems, vol. 30
  • [4] Blomer J, 2011, How to Share a Secret, p. 159, DOI 10.1007/978-3-642-15328-0_17
  • [5] Bonawitz K, Ivanov V, Kreuter B, Marcedone A, McMahan H B, Patel S, Ramage D, Segal A, Seth K. Practical Secure Aggregation for Privacy-Preserving Machine Learning. CCS'17: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017: 1175-1191
  • [6] Cao D, Chang S, Lin Z, Liu G, Sun D. Understanding Distributed Poisoning Attack in Federated Learning. 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), 2019: 233-239
  • [7] Fang M H, 2020, Proceedings of the 29th USENIX Security Symposium, p. 1623
  • [8] Fu A, Zhang X, Xiong N, Gao Y, Wang H, Zhang J. VFL: A Verifiable Federated Learning With Privacy-Preserving for Big Data in Industrial IoT. IEEE Transactions on Industrial Informatics, 2022, 18(5): 3316-3326
  • [9] Fung C, 2020, arXiv:1808.04866
  • [10] Guerraoui R, 2018, Proceedings of Machine Learning Research: 3521