Parallel Fractional Stochastic Gradient Descent With Adaptive Learning for Recommender Systems

Cited by: 6
Authors
Elahi, Fatemeh [1 ]
Fazlali, Mahmood [1 ,2 ]
Malazi, Hadi Tabatabaee [3 ]
Elahi, Mehdi [4 ]
Affiliations
[1] Shahid Beheshti Univ, Fac Math Sci, Dept Comp & Data Sci, Tehran 1983963113, Iran
[2] Univ Hatfield, Sch Phys Engn & Comp Sci, Cybersecur & Comp Syst Res Grp, Hatfield AL10 9AB, Herts, England
[3] Univ Coll Dublin, Sch Comp Sci, Dublin 4, Ireland
[4] Univ Bergen, Dept Informat Sci & Media Studies, N-5007 Bergen, Norway
Keywords
Convergence; Recommender systems; Graphics processing units; Standards; Collaborative filtering; Stochastic processes; Sparse matrices; Parallel matrix factorization; Matrix factorization; Algorithm
DOI
10.1109/TPDS.2022.3185212
CLC Number
TP301 [Theory and Methods]
Subject Classification Code
081202
Abstract
The structural change toward the digital transformation of online sales elevates the importance of parallel processing techniques in recommender systems, particularly in the pandemic and post-pandemic era. Matrix factorization (MF) is a popular and scalable approach in collaborative filtering (CF) for predicting user preferences in recommender systems. Stochastic Gradient Descent (SGD) is one of the most widely used optimization techniques for MF. Parallelizing SGD helps address big-data challenges arising from the wide range of products and the sparsity of user ratings. However, the convergence rate and accuracy of these methods suffer from the dependency between the user and item latent factors, especially in large-scale problems, and their performance is sensitive to the applied learning rates. This article proposes a new parallel method that removes these dependencies to boost speed-up and uses fractional calculus to improve accuracy and convergence rate. We also apply adaptive learning rates to further enhance the performance of the proposed method, which is implemented on the Compute Unified Device Architecture (CUDA) platform. We evaluate the method on real-world data and compare the results against closely related baselines. The results show that our method achieves high accuracy and a fast convergence rate in addition to a high degree of parallelism.
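The abstract names the ingredients (fractional-calculus SGD updates, adaptive learning rates, parallel MF on CUDA) without giving formulas. Below is a minimal single-threaded Python sketch of what a fractional SGD update with a per-parameter adaptive learning rate could look like for MF. The Caputo-style fractional term (gradient weighted by |w|^(1-alpha) / Gamma(2-alpha)) follows the fractional-gradient literature this paper builds on; the AdaGrad-style rate rule, the function name, and all hyperparameter values are illustrative assumptions, not the authors' CUDA implementation.

```python
# Sketch: fractional SGD for matrix factorization with an adaptive learning
# rate. NOT the paper's CUDA method; the fractional term is the Caputo-style
# update from the fractional-gradient literature, and the AdaGrad-style rate
# is an assumption chosen for illustration.
import numpy as np
from math import gamma

def fractional_sgd_mf(ratings, n_users, n_items, k=16, alpha=0.9,
                      lr=0.01, reg=0.02, epochs=20, seed=0):
    """Factor a sparse rating list [(u, i, r), ...] as R ~ P @ Q.T."""
    rng = np.random.default_rng(seed)
    P = 0.1 * rng.standard_normal((n_users, k))
    Q = 0.1 * rng.standard_normal((n_items, k))
    g2_P = np.zeros_like(P)           # accumulated squared gradients (AdaGrad)
    g2_Q = np.zeros_like(Q)
    c = 1.0 / gamma(2.0 - alpha)      # Caputo fractional scaling constant
    for _ in range(epochs):
        for u, i, r in ratings:
            e = r - P[u] @ Q[i]       # prediction error for this rating
            gP = -e * Q[i] + reg * P[u]
            gQ = -e * P[u] + reg * Q[i]
            # Fractional correction: gradient weighted by |w|^(1 - alpha).
            fP = gP * c * np.abs(P[u]) ** (1.0 - alpha)
            fQ = gQ * c * np.abs(Q[i]) ** (1.0 - alpha)
            g2_P[u] += gP ** 2        # adaptive per-parameter step sizes
            g2_Q[i] += gQ ** 2
            P[u] -= lr / np.sqrt(g2_P[u] + 1e-8) * (gP + fP)
            Q[i] -= lr / np.sqrt(g2_Q[i] + 1e-8) * (gQ + fQ)
    return P, Q

# Toy usage: 3 users x 3 items, a few observed ratings.
R = [(0, 0, 5.0), (0, 1, 3.0), (1, 1, 4.0), (2, 2, 1.0)]
P, Q = fractional_sgd_mf(R, n_users=3, n_items=3, k=4)
print(np.round(P @ Q.T, 2))
```

This sequential loop only illustrates the fractional and adaptive pieces of the update; the dependency between user and item factor updates that it exhibits is exactly what the paper's parallel scheme is designed to remove.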
Pages: 470-483
Number of pages: 14