Large Language Models for Recommendation: Past, Present, and Future

Cited: 0
Authors
Bao, Keqin [1 ]
Zhang, Jizhi [1 ]
Lin, Xinyu [2 ]
Zhang, Yang [1 ]
Wang, Wenjie [2 ]
Feng, Fuli [1 ]
Affiliations
[1] University of Science and Technology of China, Hefei, China
[2] National University of Singapore, Singapore
Source
PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024 | 2024
Keywords
Large Language Models; Recommender Systems; Generative Recommendation; Generative Models
DOI
10.1145/3626772.3661383
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Large language models (LLMs) have significantly influenced recommender systems, spurring interest across academia and industry in leveraging LLMs for recommendation tasks. This includes using LLMs for generative item retrieval and ranking, and developing versatile LLMs for various recommendation tasks, potentially leading to a paradigm shift in the field of recommender systems. This tutorial aims to demystify the Large Language Model for Recommendation (LLM4Rec) by reviewing its evolution and delving into cutting-edge research. We will explore how LLMs enhance recommender systems in terms of architecture, learning paradigms, and functionalities such as conversational abilities, generalization, planning, and content generation. The tutorial will shed light on the challenges and open problems in this burgeoning field, including trustworthiness, efficiency, online training, and evaluation of LLM4Rec. We will conclude by summarizing key learnings from existing studies and outlining potential avenues for future research, with the goal of equipping the audience with a comprehensive understanding of LLM4Rec and inspiring further exploration in this transformative domain.
Pages: 2993-2996
Page count: 4