Ensemble learning (EL) is a widely used approach with remarkable achievements in real-world applications. However, the quality of ensemble models constructed by common practices, such as multiple data divisions or repeated training runs, cannot be guaranteed because of the low accuracy or high complexity of the base models. Balancing prediction accuracy and model complexity is challenging, as these two conflicting aspects jointly affect generalization ability in existing works. Inspired by the ability of multi-objective optimization to generate a set of Pareto solutions for conflicting objectives, an ensemble learning algorithm, termed EL-MOPSO, is proposed based on multi-objective particle swarm optimization (MOPSO) and a chasing method that combines local search with MOPSO. EL-MOPSO aims to balance diversity, prediction accuracy, and model complexity simultaneously during training. Specifically, MOPSO is introduced into EL to generate diverse Pareto models, which lie on the approximate Pareto front (PF) of the MOPSO in terms of accuracy and complexity, to construct a model pool. A chasing method is designed in which solutions in the MOPSO archive and those produced by the local search method chase each other, improving the accuracy of the models generated by EL-MOPSO. Additionally, an adaptive reference vector is introduced to select suitable models from the model pool for the local-search model ensemble process. Experimental results on 42 test functions demonstrate the superiority of EL-MOPSO in prediction accuracy over state-of-the-art methods, with EL-MOPSO achieving the best accuracy in 28 test cases. Furthermore, the proposed method is applied to a real-world material design problem, further evidencing the competence of the EL-MOPSO algorithm.
On this problem, the ensemble model trained through EL-MOPSO achieved an MSE of 0.021, compared with MSEs of 0.033, 0.041, 0.023, and 0.025 for SWA-based EL, DREML, IETP-EL, and boosting & bagging, respectively.
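The model-pool construction described above rests on Pareto dominance over the two conflicting objectives, prediction error and model complexity. The following minimal sketch (an illustration only, not the paper's EL-MOPSO algorithm; the model names and objective values are hypothetical) shows how a pool of non-dominated candidate models can be filtered from a candidate set:

```python
# Illustrative sketch: non-dominated filtering of candidate base models by
# (prediction error, model complexity), the two conflicting objectives that
# a Pareto-based ensemble approach trades off when building a model pool.
# Candidate names and values below are hypothetical, not from the paper.

def pareto_pool(models):
    """Return models whose (error, complexity) pair is non-dominated.

    `models` is a list of (name, error, complexity) tuples; both objectives
    are minimized. A model is dominated if some other model is no worse on
    both objectives and strictly better on at least one.
    """
    pool = []
    for name, err, cpx in models:
        dominated = any(
            (e <= err and c <= cpx) and (e < err or c < cpx)
            for _, e, c in models
        )
        if not dominated:
            pool.append((name, err, cpx))
    return pool

candidates = [
    ("net-small",   0.041,  3),   # cheap but less accurate
    ("net-medium",  0.025,  7),
    ("net-large",   0.021, 12),
    ("net-bloated", 0.033, 14),   # dominated by net-large on both objectives
]
print(pareto_pool(candidates))
# → [('net-small', 0.041, 3), ('net-medium', 0.025, 7), ('net-large', 0.021, 12)]
```

A dominated model such as `net-bloated` offers no trade-off benefit and is excluded; the surviving models span the accuracy/complexity front from which an ensemble can then be selected.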