Large Language Model Ranker with Graph Reasoning for Zero-Shot Recommendation

Cited by: 0
Authors
Zhang, Xuan [1 ]
Wei, Chunyu [1 ]
Yan, Ruyu [1 ]
Fan, Yushun [1 ]
Jia, Zhixuan [1 ]
Affiliations
[1] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol (BNRist), Dept Automat, Beijing, Peoples R China
Source
ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2024, PT V | 2024 / Vol. 15020
Keywords
Large Language Model; Higher-order Information; Graph Reasoning; Recommender Systems;
DOI
10.1007/978-3-031-72344-5_24
Chinese Library Classification (CLC) number
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large Language Models (LLMs), with their powerful reasoning abilities and extensive open-world knowledge, have substantially improved recommender systems by utilizing user interactions to provide personalized suggestions, particularly in zero-shot scenarios where prior training data is absent. However, existing approaches frequently fail to capture complex, higher-order information. In response to this limitation, we integrate user-item bipartite graph information into LLMs. This integration is challenging due to the inherent gaps between graph data and sequential text, as well as the input token limitations of LLMs. We propose a novel Graph Reasoning LLM Ranker framework for Zero-Shot Recommendation (G-LLMRanker) to overcome these challenges. Specifically, G-LLMRanker constructs a semantic tree enriched with higher-order information for each node in the graph and develops an instruction template to generate text sequences that LLMs can comprehend. Additionally, to address the input token limitations of LLMs, G-LLMRanker redefines the recommendation task as a conditional sorting task, where text sequences augmented by graph information serve as conditions, and the items selected through a Mixture of Experts approach act as candidates. Experiments on public datasets demonstrate that G-LLMRanker significantly outperforms zero-shot baselines in recommendation tasks.
Pages: 356-370
Number of pages: 15
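As a rough illustration of the pipeline the abstract describes, the sketch below serializes a user's higher-order neighbourhood in a user-item bipartite graph into a textual "condition" and fills an instruction template that asks an LLM to rank a candidate list. Everything here is an assumption made for demonstration: the toy graph, the hop-by-hop traversal standing in for the semantic tree, the prompt wording, and the hard-coded candidates replacing the paper's Mixture-of-Experts selection are all hypothetical, not the authors' actual implementation.

```python
# Minimal sketch of a G-LLMRanker-style prompt builder.
# Assumptions (not from the paper): toy interaction data, a breadth-first
# hop traversal as the "semantic tree", and an ad-hoc instruction template.
from collections import defaultdict

# Toy user-item bipartite graph: user -> interacted items.
user_items = {
    "u1": ["Toy Story", "Jumanji"],
    "u2": ["Toy Story", "Heat"],
    "u3": ["Jumanji", "GoldenEye"],
}
# Inverted index: item -> users who interacted with it.
item_users = defaultdict(list)
for u, items in user_items.items():
    for it in items:
        item_users[it].append(u)

def semantic_tree(user, depth=3):
    """Collect higher-order neighbours of `user`, level by level.
    Hop 1: the user's items; hop 2: co-users of those items;
    hop 3: items of those co-users (hypothetical traversal)."""
    levels, frontier, seen = [], {user}, {user}
    for d in range(depth):
        nxt = set()
        for node in frontier:
            # Even hops expand users into items, odd hops expand items into users.
            neigh = user_items.get(node, []) if d % 2 == 0 else item_users.get(node, [])
            nxt.update(n for n in neigh if n not in seen)
        seen.update(nxt)
        levels.append(sorted(nxt))
        frontier = nxt
    return levels

def build_prompt(user, candidates, depth=3):
    """Fill a hypothetical instruction template: the serialized neighbourhood
    is the condition, and the LLM is asked to sort the candidate items."""
    levels = semantic_tree(user, depth)
    tree_text = "\n".join(f"hop {i + 1}: {', '.join(lv) or '(none)'}"
                          for i, lv in enumerate(levels))
    return (
        f"User {user}'s interaction neighbourhood (bipartite-graph context):\n"
        f"{tree_text}\n\n"
        f"Rank the following candidate items from most to least relevant:\n"
        f"{', '.join(candidates)}"
    )

if __name__ == "__main__":
    # In the paper the candidates come from a Mixture-of-Experts retriever;
    # here they are hard-coded stand-ins.
    print(build_prompt("u1", ["Heat", "GoldenEye", "Sabrina"]))
```

The resulting prompt would then be passed to an LLM of choice; keeping the graph context to a few hops is one way to stay within the input token limits the abstract mentions.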