Unleashing the Retrieval Potential of Large Language Models in Conversational Recommender Systems

Cited by: 1
Authors
Yang, Ting [1 ]
Chen, Li [1 ]
Affiliations
[1] Hong Kong Baptist Univ, Dept Comp Sci, Hong Kong, Peoples R China
Source
PROCEEDINGS OF THE EIGHTEENTH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2024 | 2024
Keywords
Conversational Recommender Systems; Retrievable Large Language Models; Instruction Tuning;
DOI
10.1145/3640457.3688146
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Conversational recommender systems (CRSs) aim to capture user preferences and provide personalized recommendations through natural language interaction. The recent advent of large language models (LLMs) has revolutionized human engagement in natural conversation, driven by their extensive world knowledge and remarkable natural language understanding and generation capabilities. However, introducing LLMs into CRSs presents new technical challenges. Directly prompting LLMs for recommendation generation requires understanding a large and evolving item corpus, as well as grounding the generated recommendations in the real item space. On the other hand, generating recommendations based on external recommendation engines, or directly integrating their suggestions into responses, may constrain the overall performance of LLMs, since these engines generally have inferior representation abilities compared to LLMs. To address these challenges, we propose an end-to-end large-scale CRS model, named ReFICR, a novel LLM-enhanced conversational recommender that empowers a retrievable large language model to perform conversational recommendation by following retrieval and generation instructions through lightweight tuning. By decomposing the complex CRS task into multiple subtasks, we formulate these subtasks into two types of instruction formats: retrieval and generation. The hidden states of ReFICR are used to generate text embeddings for retrieval, while ReFICR is simultaneously fine-tuned to handle the generation subtasks. We optimize a contrastive objective to enhance the text embeddings for retrieval, jointly with the language modeling objective for generation. Our experimental results on public datasets demonstrate that ReFICR significantly outperforms baselines in terms of recommendation accuracy and response quality. Our code is publicly available at https://github.com/yt556677/ReFICR.
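The abstract describes jointly optimizing a contrastive objective for retrieval embeddings with a language modeling objective for generation. A minimal numerical sketch of such a joint loss is shown below, assuming an InfoNCE-style contrastive term (positive plus sampled negatives, cosine similarity, temperature scaling) and a standard next-token negative log-likelihood; the function names, the weighting factor `alpha`, and the temperature value are illustrative, not taken from the paper.

```python
import numpy as np

def info_nce_loss(query_emb, pos_emb, neg_embs, temperature=0.05):
    """InfoNCE contrastive loss for one query embedding against one
    positive and a list of negative embeddings (cosine similarity)."""
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Positive similarity sits at index 0 of the logit vector.
    logits = np.array([cos(query_emb, pos_emb)] +
                      [cos(query_emb, n) for n in neg_embs]) / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

def lm_loss(gold_token_probs):
    """Average negative log-likelihood of the gold next tokens."""
    return -float(np.mean(np.log(gold_token_probs)))

def joint_loss(query_emb, pos_emb, neg_embs, gold_token_probs, alpha=1.0):
    """Joint retrieval + generation objective: contrastive term plus an
    alpha-weighted language modeling term (alpha is a free hyperparameter)."""
    return info_nce_loss(query_emb, pos_emb, neg_embs) + alpha * lm_loss(gold_token_probs)
```

In practice the query embedding would come from the LLM's hidden states (as the abstract notes) and the LM term from the model's token logits; here plain vectors and probabilities stand in so the arithmetic is easy to inspect.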
Pages: 43-52
Page count: 10