GraphDRL: GNN-based deep reinforcement learning for interactive recommendation with sparse data

Cited by: 0
Authors
Li, Wenxin [1 ]
Song, Xiao [1 ]
Tu, Yuchun [1 ]
Affiliations
[1] Beihang Univ BUAA, Sch Cyber Sci & Technol, Beijing, Peoples R China
Funding
Beijing Natural Science Foundation;
Keywords
Graph Neural Networks; Deep Reinforcement Learning; Interactive Recommendation; Sparse data; Dynamic candidate action;
DOI
10.1016/j.eswa.2025.126832
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Interactive recommendation (IR) continuously optimizes performance through sustained interactions between users and the system, thereby capturing dynamic changes in user interests more effectively. Owing to the strengths of deep reinforcement learning (DRL) in dynamic optimization and decision-making, researchers have integrated DRL models into interactive recommendation. However, interactive recommendation still faces the problem of data sparsity, and DRL-based recommendation algorithms often suffer from efficiency issues when handling large-scale discrete action spaces. To address these problems, this paper proposes a GNN-based deep reinforcement learning model, GraphDRL. Specifically, we utilize Graph Neural Networks (GNNs) to obtain embedding representations that effectively model the intricate interactions between users and items, alleviating the data sparsity problem. On this basis, we construct a deep reinforcement learning model with a temporal multi-head attention method to capture users' evolving preferences. Moreover, we propose a dynamic candidate action generation method based on item popularity and embedding representations, which not only identifies items of interest to users more accurately but also reduces the action space, thereby improving recommendation accuracy and efficiency. Experiments on three public benchmark recommendation datasets and a real-world buyer-supplier interaction dataset confirm the superior performance of our algorithm.
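To make the dynamic candidate action generation step concrete, the following is a minimal sketch, not the authors' implementation: the abstract states only that candidates are selected from item popularity and embedding representations to shrink the discrete action space. The function name generate_candidate_actions, the mixing weight alpha, the top-k cutoff, and the cosine-plus-popularity scoring rule are all illustrative assumptions.

```python
import numpy as np

def generate_candidate_actions(state_emb, item_embs, popularity, k=50, alpha=0.5):
    """Hypothetical sketch: score every item by a convex combination of
    embedding similarity to the current user state and normalized item
    popularity, then keep only the top-k items as the candidate action set.
    alpha, k, and the scoring form are assumptions, not the paper's method."""
    # Cosine similarity between the user-state embedding and each item embedding.
    sim = item_embs @ state_emb
    sim = sim / (np.linalg.norm(item_embs, axis=1) * np.linalg.norm(state_emb) + 1e-8)
    # Min-max normalize popularity counts so both terms lie in [0, 1].
    pop = (popularity - popularity.min()) / (popularity.max() - popularity.min() + 1e-8)
    score = alpha * sim + (1.0 - alpha) * pop
    # The DRL agent now ranks k candidates instead of the full item catalog.
    return np.argsort(score)[::-1][:k]

# Example: 10,000 items with 64-d embeddings; the agent scores only 50 of them.
rng = np.random.default_rng(0)
candidates = generate_candidate_actions(
    state_emb=rng.normal(size=64),
    item_embs=rng.normal(size=(10_000, 64)),
    popularity=rng.integers(1, 1_000, size=10_000).astype(float),
)
```

Restricting the policy's output to such a candidate set is what reduces the large discrete action space the abstract refers to; how GraphDRL actually weights popularity against embedding similarity is not specified here.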
Pages: 12