A reinforcement learning recommender system using bi-clustering and Markov Decision Process

Cited: 14
Authors
Iftikhar, Arta [1 ]
Ghazanfar, Mustansar Ali [2 ]
Ayub, Mubbashir [1 ]
Alahmari, Saad Ali [3 ]
Qazi, Nadeem [2 ]
Wall, Julie [2 ]
Affiliations
[1] Univ Engn & Technol, Dept Software Engn, Taxila, Pakistan
[2] Univ East London, Dept Comp Sci & Digital Technol, London, England
[3] AL Imam Mohammad Ibn Saud Islamic Univ, Dept Comp Sci, Riyadh, Saudi Arabia
Keywords
Reinforcement learning; Markov Decision Process; Bi-clustering; Q-learning; Policy
DOI
10.1016/j.eswa.2023.121541
CLC classification number
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Collaborative filtering (CF) recommender systems are static in nature and do not adapt well to changing user preferences, which may shift after a user interacts with the system or buys a product. Conventional CF clustering algorithms identify only the global distribution of patterns and hidden correlations; their inability to discover local patterns led to the popularization of bi-clustering algorithms. Bi-clustering algorithms analyze all dimensions of a dataset simultaneously and can therefore discover local patterns that give a better understanding of the underlying hidden correlations. In this paper, we model the recommendation problem as a sequential decision-making problem using a Markov Decision Process (MDP). To construct the MDP state representation, we first convert the user-item votings matrix to a binary matrix, then perform bi-clustering on this binary matrix to determine subsets of similar rows and columns. A bi-cluster merging algorithm is designed to merge similar and overlapping bi-clusters, and the resulting bi-clusters are mapped to a squared grid (SG). Reinforcement learning (RL) is applied on this SG to determine the best policy for giving recommendations to users. The start state is determined using the Improved Triangle Similarity (ITR) measure, and the reward function is computed as the overlap, in terms of users and items, between the current grid state and a prospective next state. A thorough comparative analysis was conducted, encompassing a diverse array of methodologies, including RL-based, pure collaborative filtering, and clustering methods. The results demonstrate that our proposed method outperforms its competitors in terms of precision, recall, and optimal policy learning.
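The pipeline in the abstract (binarize the votings matrix, treat merged bi-clusters as grid states, reward by user/item overlap, learn a policy with tabular Q-learning) can be sketched as below. This is a minimal illustration, not the authors' implementation: the toy bi-clusters, grid adjacency, Jaccard-style overlap reward, and all hyper-parameters are assumptions, and the ITR start-state computation is elided.

```python
# Hedged sketch of the abstract's pipeline on toy data. All names, thresholds,
# the Jaccard form of the reward, and the grid adjacency are assumptions.
import random

def binarize(ratings, threshold=3):
    """Step 1: convert the user-item votings matrix to a binary matrix."""
    return [[1 if r >= threshold else 0 for r in row] for row in ratings]

# Steps 2-3 (assumed result): bi-clusters found on the binary matrix and
# merged; each grid state is a (users, items) pair on the squared grid.
clusters = {
    0: ({"u1", "u2"}, {"i1", "i2"}),
    1: ({"u2", "u3"}, {"i2", "i3"}),
    2: ({"u4"},       {"i4"}),
}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # assumed grid adjacency

def overlap_reward(a, b):
    """Reward: user overlap plus item overlap between two grid states."""
    (ua, ia), (ub, ib) = a, b
    return len(ua & ub) / len(ua | ub) + len(ia & ib) / len(ia | ib)

def q_learning(clusters, neighbors, start, episodes=300,
               alpha=0.5, gamma=0.9, eps=0.2, horizon=8):
    """Step 4: tabular Q-learning over the grid to learn a policy."""
    rng = random.Random(0)
    Q = {s: {a: 0.0 for a in neighbors[s]} for s in clusters}
    for _ in range(episodes):
        s = start
        for _ in range(horizon):
            a = (rng.choice(neighbors[s]) if rng.random() < eps
                 else max(Q[s], key=Q[s].get))          # epsilon-greedy
            r = overlap_reward(clusters[s], clusters[a])
            Q[s][a] += alpha * (r + gamma * max(Q[a].values()) - Q[s][a])
            s = a
    return Q

# The start state would come from ITR similarity in the paper; fixed here.
Q = q_learning(clusters, neighbors, start=0)
best_next = max(Q[0], key=Q[0].get)  # state 1 shares users and items with state 0
```

The learned policy then recommends, to users in the current grid state, items from the overlapping next state with the highest Q-value.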
Pages: 18
Related papers
50 records in total
[41]   FedSlate: A Federated Deep Reinforcement Learning Recommender System [J].
Deng, Yongxin ;
Qiu, Xihe ;
Tan, Xiaoyu ;
Jin, Yaochu .
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2025,
[42]   A Reinforcement Learning-Based Markov-Decision Process (MDP) Implementation for SRAM FPGAs [J].
Ruan, Aiwu ;
Shi, Aokai ;
Qin, Liang ;
Xu, Shiyang ;
Zhao, Yifan .
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2020, 67 (10) :2124-2128
[43]   A reinforcement learning method based on an immune network adapted to a semi-Markov decision process [J].
Kogawa, N. ;
Obayashi, M. ;
Kobayashi, K. ;
Kuremoto, T. .
Artificial Life and Robotics, 2009, 13 (2) :538-542
[44]   GenSafe: A Generalizable Safety Enhancer for Safe Reinforcement Learning Algorithms Based on Reduced Order Markov Decision Process Model [J].
Zhou, Zhehua ;
Xie, Xuan ;
Song, Jiayang ;
Shu, Zhan ;
Ma, Lei .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (06) :10678-10692
[45]   On the convergence of projective-simulation-based reinforcement learning in Markov decision processes [J].
Boyajian, W. L. ;
Clausen, J. ;
Trenkwalder, L. M. ;
Dunjko, V ;
Briegel, H. J. .
QUANTUM MACHINE INTELLIGENCE, 2020, 2 (02)
[47]   A Hybrid Recommender System Using KNN and Clustering [J].
Fan, Hao ;
Wu, Kaijun ;
Parvin, Hamid ;
Beigi, Akram ;
Pho, Kim-Hung .
INTERNATIONAL JOURNAL OF INFORMATION TECHNOLOGY & DECISION MAKING, 2021, 20 (02) :553-596
[48]   Optimizing Workflow Task Clustering Using Reinforcement Learning [J].
Leong, Chin Poh ;
Liew, Chee Sun ;
Chan, Chee Seng ;
Rehman, Muhammad Habib Ur .
IEEE ACCESS, 2021, 9 :110614-110626
[49]   Solving semi-Markov decision problems using average reward reinforcement learning [J].
Das, TK ;
Gosavi, A ;
Mahadevan, S ;
Marchalleck, N .
MANAGEMENT SCIENCE, 1999, 45 (04) :560-574
[50]   From Perturbation Analysis to Markov Decision Processes and Reinforcement Learning [J].
Cao, Xi-Ren .
Discrete Event Dynamic Systems, 2003, 13 :9-39