Hierarchical Particle Swarm Optimization-incorporated Latent Factor Analysis for Large-Scale Incomplete Matrices

Cited by: 33
Authors
Chen, Jia [1 ]
Luo, Xin [2 ,3 ]
Zhou, Mengchu [4 ]
Affiliations
[1] Beihang Univ, Sch Cyber Sci & Technol, Beijing 100191, Peoples R China
[2] Chinese Acad Sci, Chongqing Engn Res Ctr Big Data Applicat Smart Ci, Chongqing Inst Green & Intelligent Technol, Chongqing Key Lab Big Data & Intelligent Comp, Chongqing 400714, Peoples R China
[3] Univ Chinese Acad Sci, Chongqing Sch, Chongqing 400714, Peoples R China
[4] New Jersey Inst Technol, Dept Elect & Comp Engn, Newark, NJ 07102 USA
Funding
National Natural Science Foundation of China;
Keywords
Adaptation models; Optimization; Convergence; Computational modeling; Sparse matrices; Particle swarm optimization; Big data; latent factor analysis; high-dimensional and sparse matrix; large-scale incomplete data; missing data estimation; industrial application; FACTORIZATION; INFORMATION; ALGORITHM; SELECTION; IMPROVE;
DOI
10.1109/TBDATA.2021.3090905
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
A Stochastic Gradient Descent (SGD)-based Latent Factor Analysis (LFA) model is highly efficient in representation learning on a High-Dimensional and Sparse (HiDS) matrix, where learning rate adaptation is vital to its efficiency and practicability. Learning rate adaptation for an SGD-based LFA model can be achieved efficiently by evolving the learning rate with an evolutionary computation algorithm. However, the resulting model commonly suffers from two premature convergence issues: a) premature convergence of the learning rate swarm driven by the evolution algorithm, and b) premature convergence of the LFA model caused by the compound effects of evolution-based learning rate adaptation and the adopted optimization algorithm. To address these issues, this work proposes a Hierarchical Particle swarm optimization-incorporated Latent factor analysis (HPL) model with a two-layered structure. The first layer pre-trains the desired latent factors with a position-transitional particle swarm optimization-based LFA model with learning rate adaptation; the second layer refines the latent factors with a newly proposed mini-batch particle swarm optimization algorithm. Experimental results on four HiDS matrices generated by industrial applications demonstrate that an HPL model can well handle the mentioned premature convergence issues, thereby achieving highly accurate representation of HiDS matrices.
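The record itself contains no code, but the abstract's core idea, evolving the learning rate of an SGD-based LFA model with a particle swarm, can be made concrete. Below is a minimal, hypothetical Python sketch of PSO-based learning rate adaptation on a toy sparse matrix. It is an illustrative simplification, not the authors' HPL method: it uses a plain single-layer PSO rather than the paper's hierarchical, position-transitional, and mini-batch variants, and all data, names, and hyperparameters here are assumptions.

```python
import numpy as np

# Hypothetical toy data: a sparse matrix stored as (row, col, value) triples,
# generated from low-rank factors with ~10% of entries observed (HiDS-like).
rng = np.random.default_rng(0)
n_users, n_items, rank = 50, 40, 5
true_P = rng.normal(size=(n_users, rank))
true_Q = rng.normal(size=(n_items, rank))
obs = [(u, i, true_P[u] @ true_Q[i])
       for u in range(n_users) for i in range(n_items) if rng.random() < 0.1]

def sgd_epoch(P, Q, data, lr, lam=0.05):
    """One SGD pass over the observed entries of the sparse matrix."""
    for u, i, r in data:
        e = r - P[u] @ Q[i]
        P[u] += lr * (e * Q[i] - lam * P[u])
        Q[i] += lr * (e * P[u] - lam * Q[i])

def rmse(P, Q, data):
    return np.sqrt(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in data]))

# PSO over the learning rate: each particle's position is a candidate lr.
n_particles, w, c1, c2 = 8, 0.7, 1.5, 1.5
pos = rng.uniform(1e-4, 0.05, n_particles)     # candidate learning rates
vel = np.zeros(n_particles)
P = rng.normal(scale=0.1, size=(n_users, rank))
Q = rng.normal(scale=0.1, size=(n_items, rank))
pbest, pbest_fit = pos.copy(), np.full(n_particles, np.inf)
gbest, gbest_fit = pos[0], np.inf

for epoch in range(30):
    # Evaluate every particle from the same snapshot of the factors.
    trials = []
    for k in range(n_particles):
        Pk, Qk = P.copy(), Q.copy()
        sgd_epoch(Pk, Qk, obs, lr=pos[k])
        trials.append((rmse(Pk, Qk, obs), Pk, Qk))
    fits = np.array([t[0] for t in trials])
    improved = fits < pbest_fit
    pbest_fit[improved], pbest[improved] = fits[improved], pos[improved]
    best = int(np.argmin(fits))
    if fits[best] < gbest_fit:
        gbest_fit, gbest = fits[best], pos[best]
    P, Q = trials[best][1], trials[best][2]    # advance with the best factors
    # Standard PSO velocity/position update on the learning-rate swarm.
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 1e-5, 0.1)

print(f"best lr = {gbest:.4f}, training RMSE = {gbest_fit:.4f}")
```

Note the design choice that motivates the paper: both the learning-rate swarm above and the SGD-trained factors can stall early, which is exactly the twofold premature convergence the HPL model's two layers (pre-training, then mini-batch PSO refinement) are designed to mitigate.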
Pages: 1524-1536
Page count: 13