Parallel Fractional Stochastic Gradient Descent With Adaptive Learning for Recommender Systems

Cited by: 6
Authors
Elahi, Fatemeh [1]
Fazlali, Mahmood [1,2]
Malazi, Hadi Tabatabaee [3]
Elahi, Mehdi [4]
Affiliations
[1] Shahid Beheshti Univ, Fac Math Sci, Dept Comp & Data Sci, Tehran 1983963113, Iran
[2] Univ Hertfordshire, Sch Phys Engn & Comp Sci, Cybersecur & Comp Syst Res Grp, Hatfield AL10 9AB, Herts, England
[3] Univ Coll Dublin, Sch Comp Sci, Dublin 4, Ireland
[4] Univ Bergen, Dept Informat Sci & Media Studies, N-5007 Bergen, Norway
Keywords
Convergence; Recommender systems; Graphics processing units; Standards; Collaborative filtering; Stochastic processes; Sparse matrices; Parallel matrix factorization; Matrix factorization; Algorithm
DOI
10.1109/TPDS.2022.3185212
Chinese Library Classification (CLC)
TP301 [Theory, Methods]
Subject Classification Code
081202
Abstract
The structural change toward the digital transformation of online sales elevates the importance of parallel processing techniques in recommender systems, particularly in the pandemic and post-pandemic era. Matrix factorization (MF) is a popular and scalable approach in collaborative filtering (CF) for predicting user preferences in recommender systems. Stochastic Gradient Descent (SGD) is one of the most widely used optimization techniques for MF. Parallelizing SGD helps address big-data challenges arising from the wide range of products and the sparsity of user ratings. However, the convergence rate and accuracy of these methods are affected by the dependency between the user and item latent factors, especially in large-scale problems, and their performance is sensitive to the chosen learning rates. This article proposes a new parallel method that removes these dependencies to boost speed-up and uses fractional calculus to improve accuracy and convergence rate. We also apply adaptive learning rates to further enhance performance. The proposed method is implemented on the Compute Unified Device Architecture (CUDA) platform. We evaluate it on real-world data and compare the results with closely related baselines. The results show that our method achieves high accuracy and a fast convergence rate in addition to a high degree of parallelism.
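The abstract describes the per-rating update only at a high level. The sketch below is a minimal, single-threaded illustration of a fractional SGD step for matrix factorization with an adaptive learning rate; the Caputo-style fractional term (the |w|^(1-alpha)/Gamma(2-alpha) factor), the AdaGrad-style step-size adaptation, and all function and parameter names are assumptions made for illustration, not the authors' exact formulation or their CUDA implementation.

# Illustrative (hypothetical) fractional SGD update for matrix factorization.
# NOT the paper's exact algorithm: the fractional term follows the general
# Caputo-style form used in prior fractional-SGD work, and the adaptive
# learning rate is a simple AdaGrad-style stand-in.
import numpy as np
from math import gamma

def fractional_sgd_mf(ratings, n_users, n_items, k=16, alpha=0.9,
                      eta=0.01, lam=0.02, epochs=20, seed=0):
    """ratings: list of (user, item, rating) triples for observed entries."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))   # user latent factors
    Q = 0.1 * rng.standard_normal((n_items, k))   # item latent factors
    gP = np.zeros_like(P)                         # AdaGrad accumulators
    gQ = np.zeros_like(Q)
    c = 1.0 / gamma(2.0 - alpha)                  # Caputo normalization
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]                   # prediction error
            # integer-order gradients of the regularized squared error
            grad_p = -e * Q[i] + lam * P[u]
            grad_q = -e * P[u] + lam * Q[i]
            # fractional-order correction: error term scaled by |w|^(1-alpha)
            frac_p = -e * Q[i] * c * np.abs(P[u]) ** (1.0 - alpha)
            frac_q = -e * P[u] * c * np.abs(Q[i]) ** (1.0 - alpha)
            # adaptive per-coordinate step sizes (AdaGrad-style stand-in)
            gP[u] += grad_p ** 2
            gQ[i] += grad_q ** 2
            step_p = eta / np.sqrt(gP[u] + 1e-8)
            step_q = eta / np.sqrt(gQ[i] + 1e-8)
            # combined update with integer- and fractional-order terms
            P[u] -= step_p * (grad_p + frac_p)
            Q[i] -= step_q * (grad_q + frac_q)
    return P, Q

On a dataset such as MovieLens, ratings would simply be the observed (user, item, rating) triples. The paper itself parallelizes such updates across CUDA threads after removing the user-item factor dependency; this sequential sketch makes no attempt to reproduce that scheduling.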
Pages: 470-483
Page count: 14