Meta-optimized Contrastive Learning for Sequential Recommendation

Cited by: 20
Authors
Qin, Xiuyuan [1 ]
Yuan, Huanhuan [1 ]
Zhao, Pengpeng [1 ]
Fang, Junhua [1 ]
Zhuang, Fuzhen [2 ,3 ]
Liu, Guanfeng [4 ]
Liu, Yanchi [5 ]
Sheng, Victor [6 ]
Affiliations
[1] Soochow Univ, Suzhou, Peoples R China
[2] Beihang Univ, Inst Artificial Intelligence, Beijing, Peoples R China
[3] Beihang Univ, SKLSDE, Beijing, Peoples R China
[4] Macquarie Univ, Sydney, NSW, Australia
[5] Rutgers State Univ, New Brunswick, NJ USA
[6] Texas Tech Univ, Lubbock, TX 79409 USA
Source
PROCEEDINGS OF THE 46TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2023 | 2023
Keywords
Sequential Recommendation; Contrastive Learning; Meta Learning; Dropout
DOI
10.1145/3539618.3591727
Chinese Library Classification
TP [automation technology, computer technology]
Discipline code
0812
Abstract
Contrastive Learning (CL) has emerged as a promising approach to the challenge of sparse and noisy recommendation data. Although promising results have been achieved, most existing CL methods apply only hand-crafted data augmentation or model augmentation to generate contrastive pairs, so a proper augmentation operation must be found for each dataset, which makes the model hard to generalize. Additionally, since insufficient input data may lead the encoder to learn collapsed embeddings, these CL methods require a relatively large amount of training data (e.g., a large batch size or a memory bank) to contrast against. However, not all contrastive pairs are informative and discriminative enough for training. Therefore, this work proposes a more general CL-based recommendation model called Meta-optimized Contrastive Learning for sequential Recommendation (MCLRec). By applying both data augmentation and learnable model augmentation operations, MCLRec extends the standard CL framework to contrast data-augmented and model-augmented views, adaptively capturing the informative features hidden in stochastic data augmentation. Moreover, MCLRec updates the model augmenters in a meta-learning manner, which improves the quality of contrastive pairs without enlarging the amount of input data. Finally, a contrastive regularization term encourages the augmenters to generate more informative augmented views and avoids overly similar contrastive pairs during meta updating. Experimental results on commonly used datasets validate the effectiveness of MCLRec.
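The record contains no implementation details, but the core objective described in the abstract, contrasting a data-augmented view against a learnable model-augmented view under an InfoNCE-style loss, can be sketched. The function names `info_nce` and `model_augment`, the linear-plus-dropout form of the augmenter, and all hyperparameters below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def info_nce(z1, z2, temperature=0.5):
    """InfoNCE loss: matching rows of z1 and z2 are positives,
    every other row of z2 serves as a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature                             # (B, B) similarities
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))

def model_augment(z, W, dropout_rate=0.2, rng=None):
    """Hypothetical 'model augmenter': a learnable linear map plus dropout,
    perturbing encoder outputs rather than the raw input sequence."""
    rng = np.random.default_rng(0) if rng is None else rng
    mask = (rng.random(z.shape) > dropout_rate).astype(float)
    return (z @ W) * mask

rng = np.random.default_rng(42)
batch, dim = 8, 16
z_data = rng.normal(size=(batch, dim))                # embeddings of a data-augmented view
W = np.eye(dim) + 0.01 * rng.normal(size=(dim, dim))  # augmenter weights (meta-learned in MCLRec)
z_model = model_augment(z_data, W, rng=rng)           # model-augmented view
loss = info_nce(z_data, z_model)
print(f"contrastive loss: {loss:.4f}")
```

In the paper's setup, the augmenter parameters (here `W`) would be updated with meta-gradients so that the generated views stay informative; the sketch keeps them fixed purely for illustration.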
Pages: 89-98
Page count: 10