Triple Adversarial Learning for Influence based Poisoning Attack in Recommender Systems

Cited: 40
Authors
Wu, Chenwang [1 ]
Lian, Defu [1 ,2 ]
Ge, Yong [3 ]
Zhu, Zhihao [1 ]
Chen, Enhong [1 ,2 ]
Affiliations
[1] Univ Sci & Technol China, Sch Data Sci, Hefei, Peoples R China
[2] Univ Sci & Technol China, Sch Comp Sci & Technol, Hefei, Peoples R China
[3] Univ Arizona, Tucson, AZ 85721 USA
Source
KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING | 2021
Funding
National Natural Science Foundation of China
关键词
Poisoning Attacks; Recommender Systems; Adversarial Learning;
DOI
10.1145/3447548.3467335
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
As an important means of alleviating information overload, recommender systems have been widely applied in many fields, such as e-commerce and advertising. However, recent studies have shown that recommender systems are vulnerable to poisoning attacks: injecting a group of carefully designed user profiles into a recommender system can severely degrade recommendation quality. Despite the progression from shilling attacks to optimization-based attacks, most attacks struggle to balance the imperceptibility and harmfulness of the generated data. To this end, we propose triple adversarial learning for influence-based poisoning attack (TrialAttack), a flexible end-to-end poisoning framework that generates inconspicuous yet harmful user profiles. Specifically, given input noise, TrialAttack directly generates malicious users through triple adversarial learning among a generator, a discriminator, and an influence module. In addition, to provide reliable influence signals for TrialAttack training, we explore a new approximation approach for estimating each fake user's influence. Through theoretical analysis, we prove that the distribution characterized by TrialAttack approximates the rating distribution of real users while still performing an effective attack. This property allows the injected users to attack in an unremarkable way. Experiments on three real-world datasets show that TrialAttack outperforms state-of-the-art attacks, and the fake profiles it generates are more difficult to detect than those of the baselines.
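The triple adversarial idea in the abstract can be sketched, very loosely, as a generator trained against two signals at once: a discriminator score (the profile should look like a real user's ratings) and an influence score (the profile should benefit the attack). The toy sketch below is an illustration under heavy assumptions: the linear sigmoid "networks" `G` and `D`, the `influence` proxy, and the finite-difference update are all hypothetical stand-ins, not the paper's actual architecture, influence approximation, or training procedure (discriminator updates and the influence module's estimation are omitted entirely).

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, noise_dim = 6, 4

# Toy linear "networks" (hypothetical stand-ins for neural models).
G = rng.normal(size=(noise_dim, n_items))   # generator weights
D = rng.normal(size=n_items)                # discriminator weights

def generate(z):
    """Map input noise to a fake rating profile in (0, 5)."""
    return 5.0 / (1.0 + np.exp(-(z @ G)))

def discriminate(profile):
    """Realism score in (0, 1); higher means more 'real-looking'."""
    return 1.0 / (1.0 + np.exp(-(profile @ D)))

def influence(profile, target_item=0):
    """Hypothetical influence proxy: how much the profile pushes the target item."""
    return profile[target_item] - profile.mean()

def gen_loss(z):
    """Generator objective: look real AND be influential."""
    fake = generate(z)
    return -(np.log(discriminate(fake) + 1e-8) + influence(fake))

# Crude finite-difference training of the generator (illustration only).
lr, eps = 0.01, 1e-4
for step in range(200):
    z = rng.normal(size=noise_dim)
    base = gen_loss(z)
    grad = np.zeros_like(G)
    for i in range(noise_dim):
        for j in range(n_items):
            G[i, j] += eps
            grad[i, j] = (gen_loss(z) - base) / eps
            G[i, j] -= eps
    G -= lr * grad

# A trained generator emits a rating profile for any fresh noise vector.
profile = generate(rng.normal(size=noise_dim))
```

In the paper's actual framework the discriminator is trained adversarially against the generator and the influence module supplies gradient signals via an influence-estimation approximation; here both are frozen placeholders so the loop stays a few lines.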
Pages: 1830-1840
Page count: 11