LLaRA: Large Language-Recommendation Assistant

Cited by: 3
Authors
Liao, Jiayi [1]
Li, Sihang [1]
Yang, Zhengyi [1]
Wu, Jiancan [1]
Yuan, Yancheng [2]
Wang, Xiang [1,3]
He, Xiangnan [1,3]
Affiliations
[1] Univ Sci & Technol China, Hefei, Peoples R China
[2] Hong Kong Polytech Univ, Hong Kong, Peoples R China
[3] Hefei Comprehens Natl Sci Ctr, Inst Dataspace, Hefei, Peoples R China
Source
PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024 | 2024
Funding
National Natural Science Foundation of China
Keywords
Sequential Recommendation; Large Language Models; Curriculum Learning; Hybrid Prompting;
DOI
10.1145/3626772.3657690
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Sequential recommendation aims to predict users' next interaction with items based on their past engagement sequence. Recently, the advent of Large Language Models (LLMs) has sparked interest in leveraging them for sequential recommendation, viewing it as language modeling. Previous studies represent items within LLMs' input prompts as either ID indices or textual metadata. However, each representation falls short: ID indices fail to encapsulate comprehensive world knowledge, while textual metadata alone does not exhibit sufficient behavioral understanding. To combine the complementary strengths of conventional recommenders (capturing users' behavioral patterns) and LLMs (encoding world knowledge about items), we introduce the Large Language-Recommendation Assistant (LLaRA). Specifically, it uses a novel hybrid prompting method that integrates ID-based item embeddings, learned by traditional recommendation models, with textual item features. Treating the "sequential behaviors of users" as a distinct modality beyond text, we employ a projector to align the traditional recommender's ID embeddings with the LLM's input space. Moreover, rather than directly exposing the hybrid prompt to the LLM, we adopt a curriculum learning strategy that gradually ramps up training complexity: we first warm up the LLM with text-only prompts, which better suit its inherent language-modeling ability, and then progressively transition to hybrid prompts, training the model to seamlessly incorporate the behavioral knowledge from the traditional sequential recommender. Empirical results validate the effectiveness of the proposed framework.
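The abstract names two concrete mechanisms: a projector that maps the recommender's ID embeddings into the LLM's input space, and a curriculum that moves from text-only to hybrid prompts. The following is a minimal PyTorch sketch of how such a setup could look; it is not the authors' released code, and the names (HybridPromptProjector, hybrid_prompt_prob), the two-layer MLP projector, the 20% warm-up fraction, and the linear ramp are illustrative assumptions.

```python
import torch
import torch.nn as nn


class HybridPromptProjector(nn.Module):
    """Maps ID embeddings from a frozen sequential recommender
    (e.g., SASRec) into the LLM's token-embedding space, so each item
    can appear in the prompt as its text tokens plus one projected
    behavioral token."""

    def __init__(self, rec_dim: int, llm_dim: int, hidden_dim: int = 512):
        super().__init__()
        # Two-layer MLP projector; the paper's exact architecture may differ.
        self.proj = nn.Sequential(
            nn.Linear(rec_dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, llm_dim),
        )

    def forward(self, item_id_emb: torch.Tensor) -> torch.Tensor:
        # (batch, seq_len, rec_dim) -> (batch, seq_len, llm_dim)
        return self.proj(item_id_emb)


def hybrid_prompt_prob(step: int, total_steps: int,
                       warmup_frac: float = 0.2) -> float:
    """Curriculum schedule (illustrative): text-only prompts during
    warm-up, then a linear ramp in the probability of using the
    hybrid prompt, reaching 1.0 by the end of training."""
    warmup_steps = int(total_steps * warmup_frac)
    if step < warmup_steps:
        return 0.0  # warm-up phase: pure text prompts
    return min(1.0, (step - warmup_steps) / max(1, total_steps - warmup_steps))
```

Under this reading, each training example would use the hybrid prompt with probability hybrid_prompt_prob(step, total_steps) and fall back to the text-only prompt otherwise, so the LLM is eased into the behavioral modality.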
Pages: 1785 - 1795
Number of pages: 11
  • [10] Devlin J, 2019, 2019 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL HLT 2019), VOL. 1, P4171