Defending against Poisoning Backdoor Attacks on Federated Meta-learning

Cited by: 6
Authors
Chen, Chien-Lun [1]
Babakniya, Sara [1]
Paolieri, Marco [1]
Golubchik, Leana [1]
Affiliations
[1] Univ Southern Calif, 941 Bloom Walk, Los Angeles, CA 90089 USA
Funding
US National Science Foundation
Keywords
Federated learning; meta-learning; poisoning attacks; backdoor attacks; matching networks; attention mechanism; security and privacy; privacy
DOI
10.1145/3523062
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Federated learning allows multiple users to collaboratively train a shared classification model while preserving data privacy. This approach, where model updates are aggregated by a central server, was shown to be vulnerable to poisoning backdoor attacks: a malicious user can alter the shared model to arbitrarily classify specific inputs from a given class. In this article, we analyze the effects of backdoor attacks on federated meta-learning, where users train a model that can be adapted to different sets of output classes using only a few examples. While the ability to adapt could, in principle, make federated learning frameworks more robust to backdoor attacks (when new training examples are benign), we find that even one-shot attacks can be very successful and persist after additional training. To address these vulnerabilities, we propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features with a support set of labeled examples. By removing the decision logic from the model shared with the federation, the success and persistence of backdoor attacks are greatly reduced.
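The defense summarized above classifies an input by comparing its features with a labeled support set, in the style of matching networks. Below is a minimal, illustrative sketch of that prediction step in PyTorch; the function and variable names are assumptions for illustration, not the paper's implementation. Attention weights are softmax-normalized cosine similarities between the query embedding and each support embedding, and the prediction is the attention-weighted sum of one-hot support labels.

import torch
import torch.nn.functional as F

def matching_predict(query_feat, support_feats, support_labels, num_classes):
    # Cosine similarity between the query embedding and each support embedding.
    sims = F.cosine_similarity(query_feat.unsqueeze(0), support_feats, dim=1)
    # Attention over the support set: softmax-normalized similarities.
    attn = F.softmax(sims, dim=0)
    # Prediction: attention-weighted sum of one-hot support labels.
    one_hot = F.one_hot(support_labels, num_classes).float()
    return attn @ one_hot

# Hypothetical usage: a 5-way, 3-shot support set with 64-dim features.
support_feats = torch.randn(15, 64)
support_labels = torch.arange(5).repeat_interleave(3)
query_feat = torch.randn(64)
probs = matching_predict(query_feat, support_feats, support_labels, num_classes=5)
print(probs.argmax().item())

Because only the feature extractor is shared with the federation while the labeled support set (and thus the decision logic) stays local, a poisoned classification head has no direct analogue in this scheme, which is the intuition behind the reduced success and persistence of backdoor attacks.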
Pages: 25