FedRecAttack: Model Poisoning Attack to Federated Recommendation

Cited by: 43
Authors
Rong, Dazhong [1 ]
Ye, Shuai [2 ]
Zhao, Ruoyan [2 ]
Yuen, Hon Ning [1 ]
Chen, Jianhai [1 ]
He, Qinming [1 ]
Affiliations
[1] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[2] Zhejiang Univ, Polytech Inst, Hangzhou, Peoples R China
Source
2022 IEEE 38TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2022) | 2022
Funding
National Key Research and Development Program of China;
Keywords
Recommender System; Federated Recommendation; Federated Learning; Poisoning Attack;
DOI
10.1109/ICDE53745.2022.00243
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Federated Recommendation (FR) has received considerable popularity and attention in the past few years. In FR, each user's feature vector and interaction data are kept locally on the user's own client and are thus private to others. Without access to this information, most existing poisoning attacks against recommender systems or federated learning lose validity. Because of this characteristic, FR is commonly considered fairly secure. However, we argue that security improvements in FR are still both possible and necessary. To support this claim, in this paper we present FedRecAttack, a model poisoning attack on FR that aims to raise the exposure ratio of target items. In most recommendation scenarios, apart from private user-item interactions (e.g., clicks, watches, and purchases), some interactions are public (e.g., likes, follows, and comments). Motivated by this observation, FedRecAttack uses the public interactions to approximate users' feature vectors, so that the attacker can generate poisoned gradients accordingly and direct malicious users to upload them in a well-designed way. To evaluate the effectiveness and side effects of FedRecAttack, we conduct extensive experiments on three real-world datasets of different sizes from two completely different scenarios. Experimental results demonstrate that FedRecAttack achieves state-of-the-art effectiveness while its side effects are negligible. Moreover, even with a small proportion (3%) of malicious users and a small proportion (1%) of public interactions, FedRecAttack remains highly effective, which reveals that FR is more vulnerable to attack than commonly believed.
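The attack idea described in the abstract (approximating user feature vectors from public interactions, then crafting gradients that raise target-item scores) can be sketched as follows. This is a minimal illustration under assumed names and shapes, not the authors' implementation: a matrix-factorization recommender with server-visible item embeddings, a least-squares approximation of user vectors from a public interaction matrix, and a single poisoned update to one target item's embedding.

```python
# Hedged sketch of the core FedRecAttack idea (all names, shapes, and the
# least-squares step are illustrative assumptions, not the paper's code).
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, d = 20, 10, 4

# Item embeddings held by the server; in FR these are visible to every client.
V = rng.normal(scale=0.1, size=(n_items, d))

# Public user-item interactions (e.g., likes or comments); 1 = interacted.
public = (rng.random((n_users, n_items)) < 0.2).astype(float)

# Step 1: the attacker approximates each user's private feature vector by a
# ridge-regularized least-squares fit of its public row against V.
U_hat = public @ V @ np.linalg.inv(V.T @ V + 1e-3 * np.eye(d))

# Step 2: a poisoned gradient for the target item's embedding, ascending the
# sum over users of the predicted score  u_hat · v_target.
target = 7
grad = U_hat.sum(axis=0)

before = (U_hat @ V[target]).mean()
V[target] += 0.5 * grad  # what malicious clients would upload as their update
after = (U_hat @ V[target]).mean()
print(after > before)    # the target item's average predicted score rises
```

The update direction is just the sum of the approximated user vectors, so its inner product with every user's vector grows on average; in the paper this effect is produced through gradients uploaded by a small fraction of malicious clients rather than a direct write to the server model.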
Pages: 2643-2655
Page count: 13