ClusterPoison: Poisoning Attacks on Recommender Systems with Limited Fake Users

Cited by: 0
Authors
Wang, Yanling [1 ,2 ]
Liu, Yuchen [1 ,2 ]
Wang, Qian [3 ]
Wang, Cong [4 ]
Affiliations
[1] Wuhan Univ, Sch Cyber Sci & Engn, Minist Educ, Key Lab Aerosp Informat Secur & Trusted Comp, Wuhan, Peoples R China
[2] City Univ Hong Kong, Dept Comp Sci, Hong Kong, Peoples R China
[3] Wuhan Univ, Wuhan, Peoples R China
[4] City Univ Hong Kong, Hong Kong, Peoples R China
Keywords
Recommender systems; Data models; Training data; Clustering algorithms; Closed box; Adaptation models; Computational modeling
DOI
10.1109/MCOM.001.2300558
Chinese Library Classification
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
Numerous prior studies on poisoning recommender systems have demonstrated that with access to users' historical data, an attacker can significantly influence the decision-making process of the recommendation model. However, these studies often assume that the attacker can control fake users amounting to 1% of the real users on the recommendation platform. On real-world large-scale platforms hosting millions of users, manipulating such a vast number of fake users without detection is exceedingly challenging. This limitation may tempt large-scale recommendation platforms to dismiss the impact of poisoning attacks. Motivated by this observation, our article explores the potential impact an attacker can have when the number of fake users is extremely limited. Specifically, assuming the attacker controls only one fake user, we design a clustering-based scheme for generating fake users. Our scheme serves as a flexible widget that can be integrated into various poisoning attacks against deep learning-based recommender systems, enhancing their effectiveness when fake users are scarce. We demonstrate that combining our approach with two different poisoning attacks improves their performance under the constraint of minimal fake users. Through experiments on the Amazon Beauty dataset (22,363 users) and the Amazon Sports dataset (35,598 users), we highlight the poisoning impact of this seemingly negligible single fake user. Our findings emphasize the threat posed by poisoning attacks with a very small set of fake users and call for stronger defenses against them in real-world recommender systems. Our code is publicly available at https://github.com/yanling02/ClusterPoison.
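The abstract only names the clustering-based generation scheme without detailing it. As a purely illustrative sketch of the general idea (not the paper's actual algorithm), one could cluster real users' item-interaction vectors with k-means and derive the single fake user's profile from the centroid of the densest cluster, so the injected profile blends in with the largest group of real users. All function names and parameters below are hypothetical.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means on dense vectors (lists of floats)."""
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest centroid (squared Euclidean distance).
        for i, p in enumerate(points):
            assign[i] = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
        # Recompute each centroid as the mean of its assigned points.
        for c in range(k):
            members = [points[i] for i in range(len(points)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return centroids, assign

def fake_user_profile(interactions, k=3, top_n=2):
    """Hypothetical: build one fake-user profile from the densest cluster.

    interactions: one binary interaction vector per real user.
    Returns the indices of the top_n highest-weight items in the centroid
    of the largest cluster, used as the fake user's interaction history.
    """
    centroids, assign = kmeans(interactions, k)
    sizes = [assign.count(c) for c in range(k)]
    center = centroids[max(range(k), key=lambda c: sizes[c])]
    return sorted(range(len(center)), key=lambda i: -center[i])[:top_n]
```

For example, if five of seven users interacted with items 0 and 1 and only two with items 2 and 3, the fake user's profile would mimic the dominant group, i.e. items 0 and 1. Target-item promotion and the integration with specific attack models are beyond this sketch; see the paper's released code for the actual method.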
Pages: 136-142
Page count: 7