PipAttack: Poisoning Federated Recommender Systems for Manipulating Item Promotion

Cited by: 60
Authors
Zhang, Shijie [1 ]
Yin, Hongzhi [1 ]
Chen, Tong [1 ]
Huang, Zi [1 ]
Nguyen, Quoc Viet Hung [2]
Cui, Lizhen [3 ]
Affiliations
[1] Univ Queensland, Brisbane, Qld, Australia
[2] Griffith Univ, Nathan, Qld, Australia
[3] Shandong Univ, Jinan, Peoples R China
Source
WSDM'22: PROCEEDINGS OF THE FIFTEENTH ACM INTERNATIONAL CONFERENCE ON WEB SEARCH AND DATA MINING | 2022
Funding
Australian Research Council
关键词
Federated Recommender System; Poisoning Attack; Deep Learning;
DOI
10.1145/3488560.3498386
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Due to growing privacy concerns, decentralization is rapidly emerging in personalized services, especially recommendation. Recent studies have also shown that centralized models are vulnerable to poisoning attacks, compromising their integrity. In the context of recommender systems, a typical goal of such poisoning attacks is to promote the adversary's target items by interfering with the training dataset and/or process. Hence, a common practice is to subsume recommender systems under the decentralized federated learning paradigm, which enables all user devices to collaboratively learn a global recommender while retaining all sensitive data locally. Because neither the full knowledge of the recommender nor the entire dataset is exposed to end-users, such federated recommendation is widely regarded as 'safe' against poisoning attacks. In this paper, we present a systematic approach to backdooring federated recommender systems for targeted item promotion. The core tactic is to exploit the inherent popularity bias that commonly exists in data-driven recommenders. As popular items are more likely to appear in the recommendation list, our attack model is designed to make the target item exhibit the characteristics of popular items in the embedding space. Then, by uploading carefully crafted gradients via a small number of malicious users during the model update, we can effectively increase the exposure rate of a target (unpopular) item in the resulting federated recommender. Evaluations on two real-world datasets show that 1) our attack model significantly boosts the exposure rate of the target item in a stealthy way, without harming the accuracy of the poisoned recommender; and 2) existing defenses are not effective enough, highlighting the need for new defenses against our local model poisoning attacks on federated recommender systems.
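To make the mechanism described in the abstract concrete, the following minimal NumPy sketch illustrates the popularity-mimicking idea: a malicious client fabricates a gradient that pulls the target item's embedding toward the centroid of popular items' embeddings. The function name craft_poisoned_update, the squared-distance objective, and all parameters are illustrative assumptions for this sketch, not the paper's exact formulation.

import numpy as np

def craft_poisoned_update(item_emb: np.ndarray,
                          target_item: int,
                          popular_items: list,
                          lr: float = 0.1) -> np.ndarray:
    """Fabricate a gradient over the item-embedding table (hypothetical sketch).

    item_emb:      (num_items, dim) current global item embeddings
    target_item:   index of the unpopular item being promoted
    popular_items: indices of genuinely popular items to imitate
    """
    # Prototype of "popular" items in the embedding space.
    centroid = item_emb[popular_items].mean(axis=0)
    # Gradient of 0.5 * ||e_target - centroid||^2 w.r.t. e_target;
    # all other rows stay zero so the update looks inconspicuous.
    grad = np.zeros_like(item_emb)
    grad[target_item] = item_emb[target_item] - centroid
    return lr * grad

# Toy round: 100 items, 16-dim embeddings, item 7 is the attack target.
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 16))
update = craft_poisoned_update(emb, target_item=7, popular_items=[1, 2, 3])
emb -= update  # server-side step: the target embedding moves toward the popular centroid

In a real federated round, such an update would be aggregated with benign clients' gradients; the paper additionally constrains the crafted gradients so that the global model's recommendation accuracy is preserved, which is what keeps the attack stealthy.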
Pages: 1415-1423 (9 pages)