Large Language Model Ranker with Graph Reasoning for Zero-Shot Recommendation

Cited by: 0
Authors
Zhang, Xuan [1 ]
Wei, Chunyu [1 ]
Yan, Ruyu [1 ]
Fan, Yushun [1 ]
Jia, Zhixuan [1 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Dept Automat, Beijing, Peoples R China
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT V | 2024, Vol. 15020
Keywords
Large Language Model; Higher-order Information; Graph Reasoning; Recommender Systems;
DOI
10.1007/978-3-031-72344-5_24
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large Language Models (LLMs), with their powerful reasoning abilities and extensive open-world knowledge, have substantially improved recommender systems by utilizing user interactions to provide personalized suggestions, particularly in zero-shot scenarios where prior training data is absent. However, existing approaches frequently fail to capture complex, higher-order information. In response to this limitation, we integrate user-item bipartite graph information into LLMs. This integration is challenging due to the inherent gaps between graph data and sequential text, as well as the input token limitations of LLMs. We propose a novel Graph Reasoning LLM Ranker framework for Zero-Shot Recommendation (G-LLMRanker) to overcome these challenges. Specifically, G-LLMRanker constructs a semantic tree enriched with higher-order information for each node in the graph and develops an instruction template to generate text sequences that LLMs can comprehend. Additionally, to address the input token limitations of LLMs, G-LLMRanker redefines the recommendation task as a conditional sorting task, where text sequences augmented by graph information serve as conditions, and the items selected through a Mixture of Experts approach act as candidates. Experiments on public datasets demonstrate that G-LLMRanker significantly outperforms zero-shot baselines in recommendation tasks.
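The abstract's pipeline can be illustrated with a minimal sketch. This record does not specify G-LLMRanker's actual semantic-tree format, instruction template, or Mixture-of-Experts selector, so the function names, tree layout, and prompt wording below are all assumptions: a user's bipartite-graph neighborhood is serialized as an indented tree (carrying higher-order co-user information), and recommendation is framed as conditional sorting over a small pre-selected candidate list to respect LLM token limits.

```python
def semantic_tree(user, interactions, max_depth=2):
    """Serialize a user's bipartite-graph neighborhood as an indented tree.

    interactions: dict mapping user id -> list of item titles.
    Depth 1 lists the user's own items; depth 2 adds co-users (users sharing
    an item) and their other items, i.e. higher-order information.
    """
    lines = [f"User: {user}"]
    for item in interactions.get(user, []):
        lines.append(f"  Item: {item}")
        if max_depth < 2:
            continue
        for other, their_items in interactions.items():
            if other != user and item in their_items:
                lines.append(f"    Co-user: {other}")
                for t in their_items:
                    if t != item:
                        lines.append(f"      Item: {t}")
    return "\n".join(lines)


def ranking_prompt(user, interactions, candidates):
    """Frame recommendation as conditional sorting: the graph-derived tree
    is the condition; `candidates` (standing in for items chosen by the
    Mixture-of-Experts stage) are what the LLM is asked to order."""
    tree = semantic_tree(user, interactions)
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (
        "Given the interaction tree below, rank the candidate items "
        "from most to least relevant for this user.\n\n"
        f"{tree}\n\nCandidates:\n{numbered}\n\nRanking:"
    )


interactions = {
    "u1": ["The Matrix", "Inception"],
    "u2": ["Inception", "Interstellar"],
}
print(ranking_prompt("u1", interactions, ["Interstellar", "Titanic"]))
```

Keeping the tree shallow (two hops) and the candidate list short is what lets the graph context and the sorting task fit inside one LLM prompt.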
Pages: 356-370
Page count: 15