A Soft Actor-Critic Algorithm for Sequential Recommendation

Times Cited: 0
Authors
Hong, Hyejin [1]
Kimura, Yusuke [1]
Hatano, Kenji [1]
Affiliations
[1] Doshisha Univ, Kyoto, Japan
Source
DATABASE AND EXPERT SYSTEMS APPLICATIONS, PT I, DEXA 2024 | 2024, Vol. 14910
Keywords
Sequential recommendation; Reinforcement learning; Recommender system; Multi-task learning
DOI
10.1007/978-3-031-68309-1_22
Chinese Library Classification
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
It has become well established that applying reinforcement learning to sequential recommendation, which predicts a user's next action, can improve recommendation performance. This is because reinforcement learning efficiently learns changes in behavior, which helps the system capture user behavior patterns. Previous research has incorporated dynamic user characteristics through Actor-Critic algorithms, but these methods are limited in how well they learn user behavior because they do not distinguish between past and present behavior during training. In this study, we therefore propose a framework that incorporates the Soft Actor-Critic (SAC) algorithm, a reinforcement learning technique, to identify correlations between users and items in a dynamic environment where the recommender system continuously receives new time-series data. Our framework outperformed existing methods in terms of recommendation accuracy, confirming that the SAC algorithm has the potential to improve the quality of sequential recommendations by capturing the temporal dynamics of user interactions.
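As background for the abstract, the following is a sketch of the standard Soft Actor-Critic objective, not a formulation taken from this paper's text. SAC maximizes an entropy-regularized return, which is what lets the policy keep exploring changing user behavior rather than collapsing onto past preferences; reading it in a sequential-recommendation setting, the state s_t would be the user's interaction history and the action a_t the recommended item, an interpretation rather than a quotation of the authors' model:

J(\pi) = \sum_{t} \mathbb{E}_{(s_t, a_t) \sim \rho_\pi} \left[ r(s_t, a_t) + \alpha \, \mathcal{H}\left( \pi(\cdot \mid s_t) \right) \right]

The critic is trained toward the soft Bellman target, where \bar{\theta} denotes target-network parameters, \gamma the discount factor, and \alpha the entropy temperature:

y_t = r(s_t, a_t) + \gamma \, \mathbb{E}_{a_{t+1} \sim \pi} \left[ Q_{\bar{\theta}}(s_{t+1}, a_{t+1}) - \alpha \log \pi(a_{t+1} \mid s_{t+1}) \right]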
Pages: 258-266
Number of pages: 9