Opposition-based particle swarm optimization with adaptive mutation strategy

Cited: 61
Authors
Dong, Wenyong [1 ]
Kang, Lanlan [1 ,2 ]
Zhang, Wensheng [3 ]
Affiliations
[1] Wuhan Univ, Comp Sch, Wuhan 430072, Hubei, Peoples R China
[2] Jiangxi Univ Sci & Technol, Sch Apply Sci, Ganzhou 341000, Peoples R China
[3] Chinese Acad Sci, State Key Lab Intelligent Control & Management Co, Inst Automat, Beijing 100190, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Particle swarm optimization; Adaptive mutation; Generalized opposition-based learning; Adaptive nonlinear inertia weight;
DOI
10.1007/s00500-016-2102-5
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
To address premature convergence in traditional particle swarm optimization (PSO), this paper proposes an opposition-based particle swarm optimization with an adaptive mutation strategy (AMOPSO). Among PSO variants, the generalized opposition-based PSO (GOPSO), which incorporates generalized opposition-based learning (GOBL), is a prominent one. However, GOPSO may increase the probability of becoming trapped in a local optimum. We therefore introduce two complementary strategies to improve its performance: (1) an adaptive mutation selection strategy (AMS) to strengthen its exploratory ability, and (2) an adaptive nonlinear inertia weight (ANIW) to enhance its exploitative ability. The rationale is as follows: (1) AMS performs a local search around the globally best particle of the current population via adaptive disturbed mutation, which improves its exploratory ability and accelerates convergence; (2) since keeping the inertia weight fixed at a constant makes PSO rigid, ANIW adaptively tunes the inertia weight to balance the competing demands of exploration and exploitation over the course of the iterations. Experiments comparing AMOPSO with several opposition-based PSO variants on 14 benchmark functions show that the proposed algorithm performs better than, or competitively with, the compared algorithms.
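The two building blocks named in the abstract can be sketched as follows. GOBL reflects each particle through a randomly scaled point inside the population's dynamic bounds (x* = k·(a + b) − x, the standard generalized opposition formula), and the inertia weight decays nonlinearly between a high and a low value. The quadratic decay curve and the parameter values here are illustrative assumptions; the paper's exact ANIW formula is not given in the abstract.

```python
import numpy as np

def gobl_opposition(pop, rng):
    """Generalized opposition-based learning (GOBL) sketch:
    x* = k * (a + b) - x, with k ~ U(0, 1) and [a, b] the current
    per-dimension min/max of the population (dynamic bounds)."""
    a = pop.min(axis=0)          # per-dimension lower bound of the swarm
    b = pop.max(axis=0)          # per-dimension upper bound of the swarm
    k = rng.random()             # random scaling factor in [0, 1)
    opp = k * (a + b) - pop
    # Repair reflected points that leave the dynamic bounds
    return np.clip(opp, a, b)

def nonlinear_inertia(t, t_max, w_max=0.9, w_min=0.4):
    """Illustrative adaptive nonlinear inertia weight: decays from w_max
    to w_min along a quadratic curve of the iteration ratio t/t_max
    (assumed shape; not the paper's exact ANIW definition)."""
    return w_max - (w_max - w_min) * (t / t_max) ** 2
```

In a full PSO loop, `nonlinear_inertia` would replace the constant weight in the velocity update `v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)`, and `gobl_opposition` would generate an opposite population competing with the current one for survival.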
Pages: 5081-5090 (10 pages)