ACDB-EA: Adaptive convergence-diversity balanced evolutionary algorithm for many-objective optimization

Cited by: 21
Authors
Zhou, Yu [1 ]
Li, Sheng [2 ]
Pedrycz, Witold [3 ]
Feng, Guorui [1 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
[2] Fudan Univ, Sch Comp Sci, Shanghai 200438, Peoples R China
[3] Univ Alberta, Dept Elect & Comp Engn, Edmonton, AB T6R 2V4, Canada
Funding
National Natural Science Foundation of China;
Keywords
Many-objective optimization; Evolutionary algorithm; Convergence-diversity balance; Adaptive weights; NONDOMINATED SORTING APPROACH; MULTIOBJECTIVE OPTIMIZATION; NSGA-III; SELECTION; DECOMPOSITION; 2-ARCHIVE; DESIGN; MOEA/D;
DOI
10.1016/j.swevo.2022.101145
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, evolutionary algorithms (EAs) have shown strong competitiveness in handling many-objective optimization problems (MaOPs) with different Pareto fronts (PFs). However, simultaneously maintaining convergence and diversity in high-dimensional objective spaces remains an open issue. This paper proposes an adaptive convergence-diversity balanced evolutionary algorithm (ACDB-EA) to address this issue, balancing convergence and diversity adaptively throughout the evolutionary process. In the proposed algorithm, a novel diversity maintenance mechanism based on global and local diversity is developed, promoting diversity by considering the two collaboratively. Specifically, the average similarity and the maximal similarity of a solution to the rest of the population represent its global and local diversity, respectively, where the similarity between two solutions is defined as the cosine similarity between their objective vectors. In the environmental selection, the proposed adaptive convergence-diversity balanced strategy adjusts the weights of convergence (defined as the L2 norm in the objective space), global diversity, and local diversity adaptively according to the state of the population. Under this strategy, each solution receives a score, and the solution with the highest score enters the next generation, i.e., the one achieving the best trade-off between convergence and diversity. In each iteration, the scores of candidate solutions are recalculated to continually identify the most suitable one, which strengthens the selection pressure toward the true PFs. We conduct an experimental study on 111 benchmark test instances with 2-20 objectives. The proposed method is shown to be superior to seven state-of-the-art algorithms in balancing convergence and diversity.
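The scoring idea described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the fixed weights, and the sign conventions are assumptions, and the paper's adaptive weight adjustment is not reproduced here (minimization of all objectives is assumed).

```python
import numpy as np

def score_population(objs, w_conv=1.0, w_glob=1.0, w_loc=1.0):
    """Score solutions by convergence and global/local diversity.

    objs: (N, M) array of objective vectors (minimization assumed).
    The fixed weights are placeholders; ACDB-EA adapts them to the
    population state, which is not reproduced in this sketch.
    """
    # Convergence: L2 norm of each objective vector (smaller is better).
    conv = np.linalg.norm(objs, axis=1)

    # Pairwise cosine similarity between objective vectors.
    unit = objs / np.linalg.norm(objs, axis=1, keepdims=True)
    sim = unit @ unit.T

    np.fill_diagonal(sim, -np.inf)   # exclude self-similarity from the max
    local_div = sim.max(axis=1)      # maximal similarity = local crowding

    np.fill_diagonal(sim, 0.0)       # exclude self-similarity from the mean
    n = len(objs)
    global_div = sim.sum(axis=1) / (n - 1)  # average similarity to others

    # Higher score is better: low norm, low average and maximal similarity.
    return -(w_conv * conv + w_glob * global_div + w_loc * local_div)
```

For example, a solution whose objective vector points in the same direction as another's (cosine similarity 1) and lies far from the origin receives a low score, so selection favors close, well-spread solutions.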
Pages: 16