CALRec: Contrastive Alignment of Generative LLMs for Sequential Recommendation

Cited by: 0
|
Authors
Li, Yaoyiran [1 ,2 ]
Zhai, Xiang [2 ]
Alzantot, Moustafa [2 ]
Yu, Keyi [2 ]
Vulic, Ivan [1 ]
Korhonen, Anna [1 ]
Hammad, Mohamed [2 ]
Affiliations
[1] Univ Cambridge, Cambridge, England
[2] Google, Mountain View, CA 94043 USA
Source
PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024 | 2024
Keywords
Sequential Recommendation; Large Language Models; Contrastive Learning;
DOI
10.1145/3640457.3688121
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Traditional recommender systems such as matrix factorization methods have primarily focused on learning a shared dense embedding space to represent both items and user preferences. Subsequently, sequence models such as RNNs, GRUs, and, more recently, Transformers have emerged and excelled in the task of sequential recommendation, which requires understanding the sequential structure of users' historical interactions to predict the next item they may like. Building on the success of Large Language Models (LLMs) across a variety of tasks, researchers have recently explored using LLMs pretrained on vast text corpora for sequential recommendation. To use LLMs for this task, both the history of user interactions and the model's prediction of the next item are expressed in text form. We propose CALRec, a two-stage LLM finetuning framework that finetunes a pretrained LLM in a two-tower fashion using a mixture of two contrastive losses and a language modeling loss: the LLM is first finetuned on a data mixture from multiple domains, followed by another round of target-domain finetuning. Our model significantly outperforms many state-of-the-art baselines (+37% in Recall@1 and +24% in NDCG@10), and our systematic ablation studies reveal that (i) both stages of finetuning are crucial and, when combined, yield improved performance, and (ii) contrastive alignment is effective among the target domains explored in our experiments.
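The contrastive alignment described in the abstract can be sketched with a standard InfoNCE-style loss that pulls a user-history embedding toward the embedding of the true next item while pushing it away from in-batch negatives, mixed with a language modeling term. This is a generic illustration, not the paper's exact formulation: the cosine similarity, temperature, mixing weight, and placeholder `lm_loss` value are all assumptions for the sketch.

```python
import math

def info_nce(query, items, positive_idx, temperature=0.1):
    """InfoNCE contrastive loss: -log softmax of the positive item's
    similarity to the query, against all items in the candidate set."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def cos(a, b):
        return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

    # Temperature-scaled cosine similarities (the "logits").
    logits = [cos(query, item) / temperature for item in items]
    # Numerically stable log-softmax of the positive logit.
    m = max(logits)
    log_sum = math.log(sum(math.exp(l - m) for l in logits))
    return -((logits[positive_idx] - m) - log_sum)

# Toy example: the user-history embedding is most similar to item 0,
# so treating item 0 as the positive yields a small loss.
query = [1.0, 0.0]
items = [[0.9, 0.1], [0.0, 1.0], [-1.0, 0.0]]
contrastive_loss = info_nce(query, items, positive_idx=0)

# Mixture of contrastive and language modeling losses, as in the
# abstract's description; the weight and lm_loss value are illustrative.
lm_loss = 2.3  # placeholder next-token cross-entropy
total_loss = contrastive_loss + 0.5 * lm_loss
```

In a real two-tower setup, `query` and `items` would come from encoding the textual user history and candidate item descriptions with the same finetuned LLM, and the loss would be averaged over a batch.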
Pages: 422-432
Page count: 11
Related Papers
50 records
  • [31] Multi-behavior collaborative contrastive learning for sequential recommendation
    Chen, Yuzhe
    Cao, Qiong
    Huang, Xianying
    Zou, Shihao
    COMPLEX & INTELLIGENT SYSTEMS, 2024, 10 (04) : 5033 - 5048
  • [32] Item attributes fusion based on contrastive learning for sequential recommendation
    Zhang, Donghao
    Qin, Jiwei
    Ma, Jie
    Yang, Zhibin
    Cui, Daishun
    Ji, Peichen
    MULTIMEDIA SYSTEMS, 2024, 30 (05)
  • [33] Contrastive Cross-Domain Sequential Recommendation
    Cao, Jiangxia
    Cong, Xin
    Sheng, Jiawei
    Liu, Tingwen
    Wang, Bin
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2022, 2022, : 138 - 147
  • [34] Simple Debiased Contrastive Learning for Sequential Recommendation
    Xie, Zuxiang
    Li, Junyi
    KNOWLEDGE-BASED SYSTEMS, 2024, 300
  • [35] Knowledge-Guided Semantically Consistent Contrastive Learning for sequential recommendation
    Shi, Chenglong
    Yan, Surong
    Zhang, Shuai
    Wang, Haosen
    Lin, Kwei-Jay
    NEURAL NETWORKS, 2025, 185
  • [36] HyperCLR: A Personalized Sequential Recommendation Algorithm Based on Hypergraph and Contrastive Learning
    Zhang, Ruiqi
    Wang, Haitao
    He, Jianfeng
    MATHEMATICS, 2024, 12 (18)
  • [37] Contrastive Learning with Frequency-Domain Interest Trends for Sequential Recommendation
    Zhang, Yichi
    Yin, Guisheng
    Dong, Yuxin
    PROCEEDINGS OF THE 17TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2023, 2023, : 141 - 150
  • [38] Graph Neural Network-Guided Contrastive Learning for Sequential Recommendation
    Yang, Xing-Yao
    Xu, Feng
    Yu, Jiong
    Li, Zi-Yang
    Wang, Dong-Xiao
    SENSORS, 2023, 23 (12)
  • [39] Multi-interest sequential recommendation with contrastive learning and temporal analysis
    Ma, Xiaowen
    Zhou, Qiang
    Li, Yongjun
    KNOWLEDGE-BASED SYSTEMS, 2024, 305
  • [40] Temporal Density-aware Sequential Recommendation Networks with Contrastive Learning
    Wang, Jihu
    Shi, Yuliang
    Yu, Han
    Zhang, Kun
    Wang, Xinjun
    Yan, Zhongmin
    Li, Hui
    EXPERT SYSTEMS WITH APPLICATIONS, 2023, 211