Hierarchical Particle Swarm Optimization-incorporated Latent Factor Analysis for Large-Scale Incomplete Matrices

Citations: 33
Authors
Chen, Jia [1 ]
Luo, Xin [2 ,3 ]
Zhou, Mengchu [4 ]
Affiliations
[1] Beihang Univ, Sch Cyber Sci & Technol, Beijing 100191, Peoples R China
[2] Chinese Acad Sci, Chongqing Engn Res Ctr Big Data Applicat Smart Ci, Chongqing Inst Green & Intelligent Technol, Chongqing Key Lab Big Data & Intelligent Comp, Chongqing 400714, Peoples R China
[3] Univ Chinese Acad Sci, Chongqing Sch, Chongqing 400714, Peoples R China
[4] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
Funding
National Natural Science Foundation of China;
Keywords
Adaptation models; Optimization; Convergence; Computational modeling; Sparse matrices; Particle swarm optimization; Big data; latent factor analysis; particle swarm optimization; high-dimensional and sparse matrix; large-scale incomplete data; missing data estimation; industrial application; FACTORIZATION; INFORMATION; ALGORITHM; SELECTION; IMPROVE;
DOI
10.1109/TBDATA.2021.3090905
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
A Stochastic Gradient Descent (SGD)-based Latent Factor Analysis (LFA) model is highly efficient in representation learning on a High-Dimensional and Sparse (HiDS) matrix, where learning rate adaptation is vital to its efficiency and practicability. Learning rate adaptation of an SGD-based LFA model can be achieved efficiently by evolving the learning rate with an evolutionary computing algorithm. However, the resultant model commonly suffers from twofold premature convergence issues, i.e., a) premature convergence of the learning rate swarm governed by the evolution algorithm, and b) premature convergence of the LFA model caused by the compound effects of evolution-based learning rate adaptation and the adopted optimization algorithm. To address these issues, this work proposes a Hierarchical Particle swarm optimization-incorporated Latent factor analysis (HPL) model with a two-layered structure. The first layer pre-trains the desired latent factors with a position-transitional particle swarm optimization-based LFA model with learning rate adaptation, while the second layer refines the latent factors with a newly proposed mini-batch particle swarm optimization algorithm. Experimental results on four HiDS matrices generated by industrial applications demonstrate that an HPL model can well handle the mentioned premature convergence issues, thereby achieving a highly accurate representation of HiDS matrices.
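The abstract describes evolving the SGD learning rate of an LFA model with particle swarm optimization. The following is a minimal, hypothetical sketch of that idea, not the authors' HPL implementation: the toy observed-entry matrix, swarm size, PSO coefficients, and epoch count are all illustrative assumptions. Each particle encodes a candidate learning rate, and its fitness is the LFA loss reached after a short SGD run over the observed entries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "HiDS" matrix given as observed (row, col, value) triples only.
observed = [(0, 0, 5.0), (0, 2, 3.0), (1, 1, 4.0), (2, 0, 1.0), (2, 2, 2.0)]
n_rows, n_cols, k = 3, 3, 2  # matrix shape and latent dimension

def lfa_loss(P, Q):
    """Squared error over the observed entries only."""
    return float(sum((v - P[i] @ Q[j]) ** 2 for i, j, v in observed))

def sgd_epoch(P, Q, eta):
    """One SGD pass over observed entries with learning rate eta."""
    for i, j, v in observed:
        err = v - P[i] @ Q[j]
        P[i] += eta * err * Q[j]
        Q[j] += eta * err * P[i]

def fitness(eta):
    """Fitness of a candidate learning rate: loss after a short SGD run
    from a fixed random initialization (seeded for determinism)."""
    r = np.random.default_rng(42)
    P = 0.1 * r.standard_normal((n_rows, k))
    Q = 0.1 * r.standard_normal((n_cols, k))
    for _ in range(50):
        sgd_epoch(P, Q, eta)
    return lfa_loss(P, Q)

# Minimal PSO over the 1-D learning-rate space.
n_particles, n_iters = 5, 10
pos = rng.uniform(0.005, 0.1, n_particles)   # candidate learning rates
vel = np.zeros(n_particles)
pbest = pos.copy()
pbest_fit = np.array([fitness(e) for e in pos])
gbest = pbest[pbest_fit.argmin()]

for _ in range(n_iters):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    # Standard PSO velocity update: inertia + cognitive + social terms.
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1e-4, 0.1)       # keep rates in a safe range
    fit = np.array([fitness(e) for e in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()]

print(f"best learning rate: {gbest:.4f}, loss: {pbest_fit.min():.4f}")
```

The paper's HPL model additionally uses a position-transitional PSO variant for pre-training and a mini-batch PSO for refinement to counter the two premature-convergence issues; this single-swarm sketch only illustrates the basic learning-rate-evolution mechanism.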
Pages: 1524-1536
Page count: 13