Enhancing the Robustness of Neural Collaborative Filtering Systems Under Malicious Attacks

Cited by: 35
Authors
Du, Yali [1 ,2 ,3 ,4 ]
Fang, Meng [5 ]
Yi, Jinfeng [6 ]
Xu, Chang [2 ,3 ]
Cheng, Jun [4 ,7 ]
Tao, Dacheng [2 ,3 ]
Affiliations
[1] Univ Technol Sydney, Fac Engn & Informat Technol, Ultimo, NSW 2007, Australia
[2] Univ Sydney, Fac Engn & Informat Technol, UBTECH Sydney Artificial Intelligence Ctr, Darlington, NSW 2008, Australia
[3] Univ Sydney, Fac Engn & Informat Technol, Sch Informat Technol, Darlington, NSW 2008, Australia
[4] Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen Key Lab Virtual Real & Human Interact Te, Shenzhen 518055, Peoples R China
[5] Tencent AI Lab, Shenzhen 518057, Peoples R China
[6] JD AI Res, Beijing 100020, Peoples R China
[7] Chinese Univ Hong Kong, Hong Kong, Peoples R China
Funding
National Natural Science Foundation of China; Australian Research Council;
Keywords
Recommendation systems; adversarial learning; collaborative filtering; malicious attacks; MATRIX FACTORIZATION; RECOMMENDATION;
DOI
10.1109/TMM.2018.2887018
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Recommendation systems have become ubiquitous in online shopping over recent decades because they help customers and businesses cope with an overwhelming number of choices. Recent collaborative filtering methods based on deep neural networks have shown promising results thanks to their ability to learn hidden representations of users and items. However, such models have been shown to be vulnerable to malicious user attacks: with knowledge of the collaborative filtering algorithm and its parameters, an attacker can easily degrade the recommendation performance. Unfortunately, this problem has not been well addressed, and defenses for recommendation systems remain understudied. In this paper, we aim to improve the robustness of recommendation systems based on two concepts: stage-wise hints training and randomness. To protect a target model, we introduce noise layers during its training to increase its resistance to adversarial perturbations. To reduce the noise layers' impact on model performance, we use intermediate layer outputs from a teacher model as hints to regularize the intermediate layers of the student target model. We consider white-box attacks, in which attackers have full knowledge of the target model. The generalizability and robustness of our method are examined in experiments and discussions, and its computational cost is comparable to that of training a standard neural network-based collaborative filtering model. Our investigation shows that the proposed defense reduces the success rate of malicious user attacks while keeping prediction accuracy comparable to that of standard neural recommendation systems.
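The two ideas described in the abstract can be sketched in a few lines of PyTorch: a noise layer injected into the target (student) model, and a hint loss that pulls the student's intermediate representation toward that of a clean teacher. This is a minimal illustrative sketch, not the authors' implementation; the layer sizes, the Gaussian noise level, and the hint weight `lambda_hint` are assumed values.

```python
# Minimal sketch (assumed architecture and hyperparameters, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NoiseLayer(nn.Module):
    """Adds zero-mean Gaussian noise to its input during training (randomness defense)."""
    def __init__(self, sigma=0.1):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        if self.training:
            return x + self.sigma * torch.randn_like(x)
        return x


class NCF(nn.Module):
    """A small neural collaborative filtering model, optionally with noise layers."""
    def __init__(self, n_users, n_items, dim=32, noisy=False):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.noise = NoiseLayer(0.1) if noisy else nn.Identity()
        self.hidden = nn.Linear(2 * dim, dim)
        self.out = nn.Linear(dim, 1)

    def forward(self, users, items):
        h = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        hint = F.relu(self.hidden(self.noise(h)))   # intermediate output used as a "hint"
        score = self.out(self.noise(hint)).squeeze(-1)
        return score, hint


# Usage sketch: regularize a noisy student with intermediate hints from a clean teacher.
n_users, n_items = 100, 200
teacher = NCF(n_users, n_items, noisy=False).eval()   # assumed pre-trained clean model
student = NCF(n_users, n_items, noisy=True)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
lambda_hint = 0.5  # weight of the hint regularizer (assumed)

users = torch.randint(0, n_users, (64,))
items = torch.randint(0, n_items, (64,))
ratings = torch.rand(64)  # placeholder feedback values

pred, s_hint = student(users, items)
with torch.no_grad():
    _, t_hint = teacher(users, items)
loss = F.mse_loss(pred, ratings) + lambda_hint * F.mse_loss(s_hint, t_hint)
opt.zero_grad()
loss.backward()
opt.step()
```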
Pages: 555-565
Number of pages: 11