Ensemble learning training strategy based on multi-objective particle swarm optimization and chasing method

Citations: 0
Authors
Li, Xinyue [1 ]
Zhang, Yu [1 ]
Hu, Wang [1 ]
Affiliations
[1] Univ Elect Sci & Technol China, Sch Comp Sci & Engn, Chengdu 611731, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Ensemble learning; Evolutionary algorithm; Multi-objective particle swarm optimization; Local search; NONDOMINATED SORTING APPROACH; EVOLUTIONARY ALGORITHMS;
DOI
10.1016/j.eswa.2025.127777
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Ensemble learning (EL) is widely used and has achieved remarkable results in real-world applications. However, ensemble models built through common practices, such as repeated data partitioning or repeated training runs, offer no quality guarantee because the base models may suffer from low accuracy or high complexity. Balancing prediction accuracy against model complexity is challenging, since these two conflicting aspects jointly determine generalization ability in existing work. Inspired by the ability of multi-objective optimization to generate a set of Pareto solutions for conflicting objectives, an ensemble learning algorithm is proposed based on multi-objective particle swarm optimization (MOPSO) and a chasing method that couples a local search with the MOPSO. This algorithm, termed EL-MOPSO, balances diversity, prediction accuracy, and model complexity simultaneously during training. Specifically, MOPSO is introduced into EL to generate diverse Pareto models, which lie on the approximate Pareto front (PF) of the MOPSO in terms of accuracy and complexity, to construct a model pool. A chasing method is designed in which solutions in the MOPSO archive and those produced by the local search chase each other to improve the accuracy of the models generated by EL-MOPSO. Additionally, an adaptive reference vector is introduced to select suitable models from the model pool for the local-search model ensemble process. Experimental results on 42 test functions demonstrate the superiority of EL-MOPSO in prediction accuracy over state-of-the-art methods, with EL-MOPSO achieving the best accuracy in 28 test cases. Furthermore, the proposed method is applied to a real-world material design problem, further evidencing the competence of the EL-MOPSO algorithm.
As a result, the ensemble model trained with EL-MOPSO achieved an average MSE of 0.021, compared to MSEs of 0.033, 0.041, 0.023, and 0.025 for SWA-based EL, DREML, IETP-EL, and boosting & bagging, respectively.
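The abstract's core construction step, keeping only base models that are Pareto-optimal with respect to prediction error and model complexity, can be sketched as follows. This is a minimal illustration of Pareto dominance filtering, not the paper's implementation; the function name and candidate models are hypothetical.

```python
def pareto_model_pool(models):
    """Return the non-dominated subset of candidate base models.

    `models` is a list of (name, error, complexity) tuples. A model is
    dominated if another model is no worse on both objectives and
    strictly better on at least one; the survivors approximate the
    Pareto front in (accuracy, complexity) that the abstract describes.
    """
    pool = []
    for name, err, comp in models:
        dominated = any(
            e2 <= err and c2 <= comp and (e2 < err or c2 < comp)
            for _, e2, c2 in models
        )
        if not dominated:
            pool.append((name, err, comp))
    return pool


# Hypothetical candidates: (name, validation error, complexity score).
candidates = [
    ("net-small", 0.041, 1.0),  # cheap but less accurate
    ("net-mid",   0.025, 2.5),
    ("net-big",   0.021, 6.0),  # accurate but expensive
    ("net-bad",   0.050, 4.0),  # dominated by net-mid on both objectives
]
print(pareto_model_pool(candidates))
# net-bad is filtered out; the other three form the model pool
```

In the paper's setting, MOPSO evolves such candidates directly, and the adaptive reference vector then picks pool members for the local-search ensemble step.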
Pages: 16