A cautious PSO with conditional random

Cited by: 26
Authors
Chan, Chien-Lung [1 ,2 ]
Chen, Chia-Li [1 ,3 ]
Affiliations
[1] Yuan Ze Univ, Dept Informat Management, Taoyuan, Taiwan
[2] Yuan Ze Univ, Innovat Ctr Big Data & Digital Convergence, Taoyuan, Taiwan
[3] Lung Hwa Univ, Dept Informat Management, Guishan Shiang 33306, Taoyuan Cty, Taiwan
Keywords
Particle swarm optimization; Conditional random; Selection; Hybrid
DOI
10.1016/j.eswa.2014.12.046
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Particle swarm optimization (PSO) simulates social behavior such as bird flocking and fish schooling. It is a population-based evolutionary meta-heuristic used to search for optima in a multidimensional space. A population (called a swarm) consists of individuals (called particles) that are updated at each iteration. Each particle uses its own best previous position (pBest) and the best position found by the whole swarm (gBest) to determine its search direction; pBest and gBest are the two key factors. PSO has four advantages: few parameters need to be adjusted, and it is easy to understand, easy to implement, and computationally efficient. However, it is easily trapped in local optima, and its convergence rate drops significantly in the later stages of the evolutionary process. The algorithm can fall into a local optimum early because a random value influences the weights of pBest and gBest. This study proposes that, by randomly adjusting a prescribed probability, the algorithm is prevented from falling into a local optimum early, and both convergence speed and search performance are improved. The method is compared with a standard PSO algorithm on eight standard benchmark functions over thirty runs. The results show faster convergence to an accurate solution on six functions (Sphere, Rosenbrock, Rastrigin, Ackley, Griewank and Salomon). (C) 2015 Elsevier Ltd. All rights reserved.
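The abstract's description of the pBest/gBest update can be illustrated with a minimal sketch of standard PSO minimizing the Sphere benchmark. The inertia weight `w` and acceleration constants `c1`, `c2` below are common textbook defaults, not values from the paper, and the paper's conditional-random modification is not reproduced here because the abstract does not specify its details.

```python
import random

def pso_sphere(dim=2, n_particles=10, iters=200, seed=0):
    """Minimize the Sphere function with a standard (textbook) PSO.

    Parameter values are illustrative defaults, not taken from the paper.
    Returns the best fitness found (0.0 is the global optimum).
    """
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and acceleration constants
    sphere = lambda x: sum(v * v for v in x)

    # Random initial positions in [-5, 5]^dim, zero initial velocities.
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [sphere(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Random values r1, r2 weight the pull toward pBest and gBest;
                # this is the randomness the paper's method conditions on.
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = sphere(pos[i])
            if f < pbest_fit[i]:                  # update personal best
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f < gbest_fit:                 # update global best
                    gbest, gbest_fit = pos[i][:], f
    return gbest_fit
```

On this easy unimodal benchmark the swarm converges close to the optimum at the origin; the paper's contribution targets the harder case where the same random weighting pulls the swarm into a local optimum early.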
Pages: 4120-4125
Number of pages: 6