Perception Poisoning Attacks in Federated Learning

Cited by: 10
Authors
Chow, Ka-Ho [1]
Liu, Ling [1]
Affiliations
[1] Georgia Institute of Technology, School of Computer Science, Atlanta, GA 30332 USA
Source
2021 Third IEEE International Conference on Trust, Privacy and Security in Intelligent Systems and Applications (TPS-ISA 2021), 2021
Funding
National Science Foundation (USA)
Keywords
federated learning; object detection; data poisoning; deep neural networks; defenses
DOI
10.1109/TPSISA52974.2021.00017
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Federated learning (FL) enables decentralized training of deep neural networks (DNNs) for object detection over a distributed population of clients. It allows edge clients to keep their data local and share only parameter updates with a federated server. However, the distributed nature of FL also opens the door to new threats. In this paper, we present targeted perception poisoning attacks against federated object detection learning, in which a subset of malicious clients seeks to poison the federated training of a global object detection model by sharing perception-poisoned local model parameters. We first introduce three targeted perception poisoning attacks, each of which has severe adverse effects only on the objects under attack. We then analyze attack feasibility, the impact of malicious client availability, and attack timing. To safeguard FL systems against such contagious threats, we introduce spatial signature analysis as a defense that separates benign local model parameters from poisoned local model contributions, identifies malicious clients, and eliminates their impact on the federated training. Extensive experiments on object detection benchmark datasets validate that defense-empowered federated object detection learning improves robustness against all three types of perception poisoning attacks. The source code is available at https://github.com/git-disl/Perception-Poisoning.
Pages: 146-155
Page count: 10
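
The abstract describes a server-side pattern of filtering poisoned parameter updates before aggregation but does not spell out the spatial signature analysis algorithm. The sketch below is a minimal illustration of that pattern only: it uses a generic cosine-similarity outlier filter as a stand-in for the paper's defense, and `flag_suspicious`, `z_thresh`, and the toy update shapes are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def flatten_update(update):
    """Flatten a client's per-layer parameter update into one vector."""
    return np.concatenate([w.ravel() for w in update])

def flag_suspicious(updates, z_thresh=2.0):
    """Score each client update by its mean cosine similarity to all
    other updates and flag low-similarity outliers. A generic
    signature-style filter, not the paper's exact algorithm."""
    vecs = np.stack([flatten_update(u) for u in updates])
    norms = np.linalg.norm(vecs, axis=1, keepdims=True)
    sims = (vecs @ vecs.T) / (norms * norms.T + 1e-12)
    np.fill_diagonal(sims, np.nan)          # ignore self-similarity
    scores = np.nanmean(sims, axis=1)       # agreement with the cohort
    z = (scores - scores.mean()) / (scores.std() + 1e-12)
    return z < -z_thresh                    # True = likely poisoned

def federated_average(updates, keep_mask):
    """Plain FedAvg over the updates that passed the filter."""
    kept = [u for u, keep in zip(updates, keep_mask) if keep]
    return [np.mean(layers, axis=0) for layers in zip(*kept)]

# Toy round: 9 benign clients plus 1 client whose update points elsewhere.
rng = np.random.default_rng(0)
shapes = [(4, 4), (4,)]
benign = [[rng.normal(0.1, 0.01, s) for s in shapes] for _ in range(9)]
poisoned = [[rng.normal(-1.0, 0.01, s) for s in shapes]]
updates = benign + poisoned

suspect = flag_suspicious(updates)
global_update = federated_average(updates, ~suspect)
print("flagged clients:", np.where(suspect)[0])
```

Comparing updates in a shared parameter space is the essence of the approach sketched here: benign clients trained toward the same objective produce correlated updates, so an update that disagrees with the cohort stands out. The actual defense in the paper operates on spatial signatures of object detection outputs, which this illustration does not reproduce.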