Enhancing Sequential Recommenders with Augmented Knowledge from Aligned Large Language Models

Cited by: 2
Authors
Ren, Yankun [1 ]
Chen, Zhongde [1 ]
Yang, Xinxing [1 ]
Li, Longfei [1 ]
Jiang, Cong [1 ]
Cheng, Lei [1 ]
Zhang, Bo [1 ]
Mo, Linjian [1 ]
Zhou, Jun [1 ]
Affiliations
[1] Ant Group, Hangzhou, People's Republic of China
Source
PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024 | 2024
Keywords
Sequential Recommendation; Large Language Models; Alignment
DOI
10.1145/3626772.3657782
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Recommender systems are widely used in various online platforms. In the context of sequential recommendation, it is essential to accurately capture the chronological patterns in user activities to generate relevant recommendations. Conventional ID-based sequential recommenders have shown promise but lack comprehensive real-world knowledge about items, limiting their effectiveness. Recent advancements in Large Language Models (LLMs) offer the potential to bridge this gap by leveraging the extensive real-world knowledge encapsulated in LLMs. However, integrating LLMs into sequential recommender systems comes with its own challenges, including inadequate representation of sequential behavior patterns and long inference latency. In this paper, we propose SeRALM (Enhancing Sequential Recommenders with Augmented Knowledge from Aligned Large Language Models) to address these challenges. SeRALM integrates LLMs with conventional ID-based sequential recommenders for sequential recommendation tasks. We combine text-format knowledge generated by LLMs with item IDs and feed this enriched data into ID-based recommenders, benefiting from the strengths of both paradigms. Moreover, we develop a theoretically underpinned alignment training method that refines the LLM's generation using feedback from ID-based recommenders for better knowledge augmentation. We also present an asynchronous technique to expedite the alignment training process. Experimental results on public benchmarks demonstrate that SeRALM significantly improves the performance of ID-based sequential recommenders. Further, a series of ablation studies and analyses corroborate SeRALM's proficiency in steering LLMs to generate more pertinent and advantageous knowledge across diverse scenarios.
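To make the knowledge-augmentation idea concrete, below is a minimal, hypothetical PyTorch sketch of how text-format knowledge generated by an LLM might be fused with item-ID embeddings in a Transformer-based sequential recommender. This is not the authors' code: the abstract does not specify architectural details, so all module names, dimensions, and the early-fusion design here are illustrative assumptions, and the alignment-training feedback loop is omitted.

    # Illustrative sketch only: fuse per-item LLM knowledge (precomputed as
    # text embeddings, e.g. from a frozen text encoder) with ID embeddings,
    # then encode the sequence with a small Transformer. All names and
    # hyperparameters are assumptions, not taken from the SeRALM paper.
    import torch
    import torch.nn as nn

    class KnowledgeAugmentedRecommender(nn.Module):
        def __init__(self, num_items, id_dim=64, text_dim=768, max_len=50):
            super().__init__()
            self.id_emb = nn.Embedding(num_items + 1, id_dim, padding_idx=0)
            self.pos_emb = nn.Embedding(max_len, id_dim)
            # Project LLM-knowledge text embeddings into the ID-embedding space.
            self.text_proj = nn.Linear(text_dim, id_dim)
            # Early fusion: concatenate ID and knowledge views, then mix.
            self.fuse = nn.Linear(2 * id_dim, id_dim)
            layer = nn.TransformerEncoderLayer(
                d_model=id_dim, nhead=2, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)

        def forward(self, item_ids, knowledge_embs):
            # item_ids:       (batch, seq_len) integer item IDs
            # knowledge_embs: (batch, seq_len, text_dim) per-item LLM knowledge
            ids = self.id_emb(item_ids)
            txt = self.text_proj(knowledge_embs)
            h = self.fuse(torch.cat([ids, txt], dim=-1))
            pos = torch.arange(item_ids.size(1), device=item_ids.device)
            h = self.encoder(h + self.pos_emb(pos))
            # Score every catalogue item against the final sequence state.
            return h[:, -1] @ self.id_emb.weight.T  # (batch, num_items + 1)

    # Toy usage: 3 users, sequences of 5 items, a 100-item catalogue.
    model = KnowledgeAugmentedRecommender(num_items=100)
    scores = model(torch.randint(1, 101, (3, 5)), torch.randn(3, 5, 768))
    print(scores.shape)  # torch.Size([3, 101])

Because the knowledge embeddings can be generated and cached offline, a design along these lines would keep the ID-based recommender's low inference latency, which is consistent with the latency concern the abstract raises.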
Pages: 345-354
Page count: 10