Bayesian network as an adaptive parameter setting approach for genetic algorithms

Cited by: 0
Authors
Corriveau, Guillaume [1 ]
Guilbault, Raynald [1 ]
Tahan, Antoine [1 ]
Sabourin, Robert [2 ]
Affiliations
[1] Ecole Technol Super, Dept Mech Engn, 1100 Notre Dame St West, Montreal, PQ H3C 1K3, Canada
[2] Ecole Technol Super, Dept Automated Mfg Engn, 1100 Notre Dame St West, Montreal, PQ H3C 1K3, Canada
Keywords
Bayesian network; Exploration/exploitation balance; Genetic algorithms; Parameter adaptation
DOI
10.1007/s40747-016-0010-z
CLC number
TP18 [Artificial Intelligence Theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Parameter setting currently ranks among the most actively researched topics in the evolutionary algorithm (EA) community. This can be explained by the major impact EA parameters have on search performance. However, parameter setting has been shown to be both problem dependent and evolution dependent. Moreover, because parameters interact in complex ways, developing an efficient and beneficial parameter setting approach is not an easy feat, and no broadly recognized solution has emerged to date. In this paper, we adopt parameter adaptation, using a strategy based on a Bayesian network, to address the parameter setting dependencies mentioned above. The adaptive framework is elaborated for a steady-state genetic algorithm (SSGA) to control nine parameters. To judge the productivity of parameter states, we consider the population's fitness improvement, as well as exploration/exploitation balance management. The performance of this proposal, a Bayesian network for genetic algorithm parameter adaptation (BNGA), is assessed on the CEC'05 benchmark. BNGA is compared to a static parameter setting, a naive approach, three common adaptive systems (PM, AP, and FAUC-RMAB), and two state-of-the-art EAs (CMA-ES and G-CMA-ES). Our results statistically demonstrate that the performance of BNGA is equivalent to that of FAUC-RMAB, CMA-ES, and G-CMA-ES, and overall is superior to that of all the other SSGA parameter setting approaches. However, these results also reveal that all the approaches considered have great difficulty finding global optima in a multimodal problem set. This suggests a lack of complementarity and/or synergy among parameter states.
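The adaptive baselines named in the abstract (e.g., probability matching, PM) share a common loop: estimate each parameter state's quality from observed rewards, then select states with probability proportional to quality. The following is a minimal sketch of that loop, assuming a generic PM scheme; the class name, example states, and constants are illustrative, not the paper's actual BNGA implementation (which instead uses a Bayesian network over nine SSGA parameters).

```python
import random


class ProbabilityMatching:
    """Minimal probability-matching (PM) adaptive parameter control sketch.

    Selection probability of each parameter state is proportional to its
    estimated quality, with a floor p_min so no state is ever starved.
    """

    def __init__(self, states, p_min=0.05, alpha=0.3):
        self.states = list(states)
        self.p_min = p_min            # minimum selection probability per state
        self.alpha = alpha            # learning rate for the quality update
        self.quality = {s: 1.0 for s in self.states}

    def select(self):
        # p(s) = p_min + (1 - k * p_min) * q(s) / sum(q)
        total = sum(self.quality.values())
        k = len(self.states)
        weights = [self.p_min + (1.0 - k * self.p_min) * self.quality[s] / total
                   for s in self.states]
        return random.choices(self.states, weights=weights, k=1)[0]

    def update(self, state, reward):
        # Exponential recency-weighted average of observed rewards
        # (e.g., an offspring's fitness improvement over its parents).
        self.quality[state] += self.alpha * (reward - self.quality[state])


# Example: adapt among three hypothetical mutation-rate states.
pm = ProbabilityMatching(["low", "medium", "high"])
choice = pm.select()
pm.update(choice, reward=0.8)
```

The `p_min` floor keeps every state selectable, preserving some exploration even after one state dominates; the abstract's closing observation (a lack of complementarity among parameter states) concerns exactly this kind of credit assignment.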
Pages: 1-22 (22 pages)