Efficiently Achieving Privacy Preservation and Poisoning Attack Resistance in Federated Learning

Citations: 6
Authors
Li, Xueyang [1 ,2 ]
Yang, Xue [1 ]
Zhou, Zhengchun [1 ]
Lu, Rongxing [3 ]
Affiliations
[1] Southwest Jiaotong Univ, Sch Informat Sci & Technol, Chengdu 611730, Peoples R China
[2] Xidian Univ, Sch Cyber Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[3] Univ New Brunswick, Canadian Inst Cybersecur, Fac Comp Sci, Fredericton, NB E3B 5A3, Canada
Funding
National Natural Science Foundation of China;
Keywords
Federated learning; Additives; Computational efficiency; privacy preservation; poisoning attack; secure multi-party computation;
DOI
10.1109/TIFS.2024.3378006
CLC Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Federated learning enables clients to train models locally and send local updates to the server instead of their raw datasets, thereby preserving data privacy to some extent. However, adversaries can still pry into users' privacy by inferring information from these updates, and can compromise the integrity of the global model through poisoning attacks. Many related works therefore integrate poisoning attack detection methods with secure computation to address both issues. Nevertheless, they still face two major challenges: 1) their efficiency is too low for practical deployment; and 2) privacy remains at risk of leakage, e.g., the distance between two local updates used to detect poisoning attacks may be exposed to the server. To address these challenges, in this paper we propose an Efficient Privacy-preserving and Poisoning attack Resistant scheme for Federated Learning, named EPPRFL, which preserves the privacy of local updates and of the intermediate information used to detect poisoning attacks. In particular, we design an efficient poisoning attack detection method based on a Euclidean distance filtering & clipping technique, named F&C. Then, to preserve the privacy of the F&C method, we efficiently customize secure comparison, secure median, secure distance computation, and secure clipping protocols based on additive secret sharing. Experimental results and theoretical analysis show that, compared with existing schemes, EPPRFL better resists poisoning attacks and incurs lower computational and communication overheads on the client side.
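The secure protocols above are built on additive secret sharing. The following is a minimal illustrative sketch of that primitive only, not the paper's protocols: it shows how a value is split into random shares whose sum reconstructs it, and how parties can add shares locally to obtain a sharing of a sum (the linearity that secure aggregation relies on). The modulus `Q` and the two-party setting are assumptions chosen for illustration; secure multiplications (e.g., for distance computation) additionally require techniques such as Beaver triples, which this sketch omits.

```python
import secrets

Q = 2**61 - 1  # illustrative modulus; real deployments fix their own ring/field

def share(x, n=2):
    """Split integer x into n additive shares modulo Q."""
    shares = [secrets.randbelow(Q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % Q)  # last share makes the sum equal x
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo Q."""
    return sum(shares) % Q

# Linearity: parties holding shares of x and y add them locally,
# yielding shares of x + y without revealing either input.
x, y = 12345, 67890
sx, sy = share(x), share(y)
sz = [(a + b) % Q for a, b in zip(sx, sy)]
assert reconstruct(sz) == (x + y) % Q
```

Each individual share is uniformly random, so no single party learns anything about the secret; only the full set of shares reveals it.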
Pages: 4358-4373 (16 pages)
References
33 entries
  • [1] Andrew G, 2021, ADV NEUR IN, V34
  • [2] [Anonymous], 2012, Int. J. Inf.Secur., V11, P403
  • [3] Bagdasaryan E, 2020, PR MACH LEARN RES, V108, P2938
  • [4] Bar-Ilan J., 1989, Proceedings of the Eighth Annual ACM Symposium on Principles of Distributed Computing, P201, DOI 10.1145/72981.72995
  • [5] BEAVER D, 1992, LECT NOTES COMPUT SC, V576, P420
  • [6] Blanchard P, 2017, ADV NEUR IN, V30
  • [7] Bogdanov D., Niitsoo M., Toft T., Willemson J., High-performance secure multi-party computation for data mining applications, INTERNATIONAL JOURNAL OF INFORMATION SECURITY, 2012, 11(6): 403-418
  • [8] Bonawitz K., Ivanov V., Kreuter B., Marcedone A., McMahan H. B., Patel S., Ramage D., Segal A., Seth K., Practical Secure Aggregation for Privacy-Preserving Machine Learning, CCS'17: PROCEEDINGS OF THE 2017 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, 2017: 1175-1191
  • [9] Cao X., Fang M., Liu J., Gong N. Z., FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping, 28TH ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2021), 2021