A multi-intent based multi-policy relay contrastive learning for sequential recommendation

Cited by: 0
Authors
Di W. [1 ]
Affiliations
[1] School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
Keywords
Contrastive learning; Sequential recommendation;
DOI
10.7717/PEERJ-CS.1088
Abstract
Sequential recommendation has become a trending research topic for its ability to capture dynamic user preferences. However, it still falls short of expectations when dealing with sparse data. Contrastive learning (CL) has recently shown potential for mitigating data sparsity. Under sparse data, many item representations are destined to be poorly learned, so it is better to focus on learning a set of influential latent intents that have a greater impact on how the sequence evolves. In this article, we devise a novel multi-intent self-attention module, which modifies the self-attention mechanism to decompose user behavior sequences into multiple latent intents that capture users' different tastes and inclinations. Beyond this architectural change, we also extend the model to handle multiple contrastive tasks. Specifically, some data augmentations in CL can be very different from one another; trained together, they may fail to cooperate and stumble over each other. To solve this problem, we propose a multi-policy relay training strategy, which divides training into multiple stages according to the number of data augmentations. In each stage we optimize the relay to its best on the basis of the previous stage, combining the advantages of the different schemes and making the best use of them. Experiments on four public recommendation datasets demonstrate the superiority of our model. © Copyright 2022 Di
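The two ideas summarized above can be illustrated with a minimal PyTorch sketch. This is not the paper's actual implementation: the design of attending with one learnable query per latent intent, and all class, function, and parameter names, are assumptions made for illustration only.

```python
import torch
import torch.nn as nn


class MultiIntentSelfAttention(nn.Module):
    """Sketch of a multi-intent attention module: K learnable intent
    prototypes attend over the item sequence, decomposing it into K
    latent-intent representations (hypothetical design)."""

    def __init__(self, hidden_dim: int, num_intents: int):
        super().__init__()
        # One learnable query vector per latent intent.
        self.intent_queries = nn.Parameter(torch.randn(num_intents, hidden_dim))
        self.key_proj = nn.Linear(hidden_dim, hidden_dim)
        self.value_proj = nn.Linear(hidden_dim, hidden_dim)
        self.scale = hidden_dim ** 0.5

    def forward(self, seq_emb: torch.Tensor) -> torch.Tensor:
        # seq_emb: (batch, seq_len, hidden_dim)
        keys = self.key_proj(seq_emb)      # (B, L, D)
        values = self.value_proj(seq_emb)  # (B, L, D)
        # Score every intent query against every sequence position.
        scores = torch.einsum('kd,bld->bkl', self.intent_queries, keys) / self.scale
        weights = scores.softmax(dim=-1)   # (B, K, L)
        # Each intent aggregates the sequence under its own attention weights.
        return torch.einsum('bkl,bld->bkd', weights, values)  # (B, K, D)


def relay_train(model, augmentations, train_one_stage):
    """Sketch of the multi-policy relay strategy: one training stage per
    data augmentation, each stage starting from the weights the previous
    stage converged to (model updates carry over between stages)."""
    for aug in augmentations:
        train_one_stage(model, aug)
    return model
```

For example, `MultiIntentSelfAttention(16, 4)` applied to a batch of shape `(2, 10, 16)` returns four intent representations per user, shape `(2, 4, 16)`; `relay_train` then runs one contrastive-training stage per augmentation in order.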