Adaptive Particle Swarm Optimization

Times Cited: 1448
Authors
Zhan, Zhi-Hui [1]
Zhang, Jun [1]
Li, Yun [2,3]
Chung, Henry Shu-Hung [4]
Affiliations
[1] Sun Yat Sen Univ, Dept Comp Sci, Guangzhou 510275, Guangdong, Peoples R China
[2] Univ Glasgow, Dept Elect & Elect Engn, Glasgow G12 8LT, Lanark, Scotland
[3] Univ Elect Sci & Technol China, Chengdu 610054, Peoples R China
[4] City Univ Hong Kong, Dept Elect Engn, Kowloon, Hong Kong, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS | 2009 / Vol. 39 / Iss. 6
Funding
U.S. National Science Foundation;
Keywords
Adaptive particle swarm optimization (APSO); evolutionary computation; global optimization; particle swarm optimization (PSO); CONVERGENCE; STABILITY; ALGORITHM; TRACKING; OPTIMA;
DOI
10.1109/TSMCB.2009.2015956
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
An adaptive particle swarm optimization (APSO) that features better search efficiency than classical particle swarm optimization (PSO) is presented. More importantly, it can perform a global search over the entire search space with a faster convergence speed. APSO consists of two main steps. First, by evaluating the population distribution and particle fitness, a real-time evolutionary state estimation procedure is performed to identify, in each generation, one of four defined evolutionary states: exploration, exploitation, convergence, and jumping out. This enables automatic control of the inertia weight, acceleration coefficients, and other algorithmic parameters at run time, improving search efficiency and convergence speed. Second, an elitist learning strategy is performed when the evolutionary state is classified as convergence; the strategy acts on the globally best particle to help it jump out of likely local optima. APSO has been comprehensively evaluated on 12 unimodal and multimodal benchmark functions, and the effects of parameter adaptation and elitist learning are studied. Results show that APSO substantially enhances the performance of the PSO paradigm in terms of convergence speed, global optimality, solution accuracy, and algorithm reliability. Because APSO introduces only two new parameters to the PSO paradigm, it does not add design or implementation complexity.
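The abstract above outlines a two-step adaptive loop: run-time parameter control driven by evolutionary state estimation, plus elitist learning applied to the globally best particle near convergence. As a rough illustration only, the following is a minimal Python sketch of such a loop under simplifying assumptions: the paper's fuzzy four-state classification is replaced by a distance-based evolutionary factor with a fixed threshold, and the function and parameter names (apso_sketch, sphere, the 0.25 trigger, the sigma schedule) are illustrative choices, not the authors' reference implementation.

```python
import numpy as np

def sphere(x):
    # Simple unimodal test function (a standard PSO benchmark).
    return np.sum(x ** 2)

def apso_sketch(obj, dim=10, n_particles=20, iters=200, lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_particles, dim))            # positions
    V = np.zeros((n_particles, dim))                        # velocities
    pbest = X.copy()
    pbest_f = np.array([obj(x) for x in X])
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    c1 = c2 = 2.0                                           # acceleration coefficients (kept fixed here)

    for t in range(iters):
        # --- Evolutionary state estimation (simplified placeholder) ---
        # The paper derives an "evolutionary factor" from the mean distance of the best
        # particle to the others, relative to the swarm's distance range, and then fuzzily
        # classifies exploration / exploitation / convergence / jumping out. Here we only
        # compute a crude distance-based factor f in [0, 1].
        d = np.array([np.mean(np.linalg.norm(X - X[i], axis=1)) for i in range(n_particles)])
        f = (d[np.argmin(pbest_f)] - d.min()) / (d.max() - d.min() + 1e-12)

        # Adaptive inertia weight as a sigmoid of f (maps f in [0, 1] to roughly [0.4, 0.9]).
        w = 1.0 / (1.0 + 1.5 * np.exp(-2.6 * f))

        # --- Elitist learning (triggered here by a fixed threshold, an assumption) ---
        if f < 0.25:                                        # treat a small f as "convergence"
            trial = gbest.copy()
            k = rng.integers(dim)                           # perturb one randomly chosen dimension
            sigma = 1.0 - 0.9 * t / iters                   # shrinking perturbation strength
            trial[k] += (ub - lb) * rng.normal(0.0, sigma)
            trial = np.clip(trial, lb, ub)
            tf = obj(trial)
            if tf < gbest_f:
                gbest, gbest_f = trial, tf                  # accept the improved elite
            else:
                X[np.argmax(pbest_f)] = trial               # otherwise replace the worst particle

        # --- Standard PSO velocity/position update ---
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, lb, ub)

        fit = np.array([obj(x) for x in X])
        improved = fit < pbest_f
        pbest[improved], pbest_f[improved] = X[improved], fit[improved]
        if pbest_f.min() < gbest_f:
            g = np.argmin(pbest_f)
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]

    return gbest, gbest_f

if __name__ == "__main__":
    best, best_f = apso_sketch(sphere)
    print("best fitness:", best_f)
```

The sketch keeps the acceleration coefficients fixed for brevity; in the paper they are also adapted according to the estimated evolutionary state.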
Pages: 1362 - 1381
Page count: 20