On the Global Convergence of Particle Swarm Optimization Methods

Cited by: 0
Authors
Hui Huang
Jinniao Qiu
Konstantin Riedl
Affiliations
[1] University of Graz, Institute of Mathematics and Scientific Computing
[2] University of Calgary, Department of Mathematics and Statistics
[3] Technical University of Munich, Department of Mathematics, School of Computation, Information and Technology
[4] Munich Center for Machine Learning
Source
Applied Mathematics & Optimization | 2023, Vol. 88
Keywords
Global derivative-free optimization; High-dimensional nonconvex optimization; Metaheuristics; Particle swarm optimization; Mean-field limit; Vlasov-Fokker-Planck equations; 65C35; 65K10; 90C26; 90C56; 35Q90; 35Q83;
DOI
Not available
Abstract
In this paper we provide a rigorous convergence analysis for the renowned particle swarm optimization method using tools from stochastic calculus and the analysis of partial differential equations. Based on a continuous-time formulation of the particle dynamics as a system of stochastic differential equations, we establish convergence to a global minimizer of a possibly nonconvex and nonsmooth objective function in two steps. First, we prove consensus formation of an associated mean-field dynamics by analyzing the time evolution of the variance of the particle distribution, which acts as a Lyapunov function of the dynamics. We then show that this consensus is close to a global minimizer by employing the asymptotic Laplace principle and a tractability condition on the energy landscape of the objective function. These results allow for the use of memory mechanisms and hold for a rich class of objectives, provided certain well-preparation conditions on the hyperparameters and the initial datum are satisfied. In a second step, at least in the case without memory effects, we provide a quantitative result about the mean-field approximation of particle swarm optimization, which specifies the convergence of the interacting particle system to the associated mean-field limit. Combining these two results yields global convergence guarantees for the numerical particle swarm optimization method with provable polynomial complexity. To demonstrate the applicability of the method, we propose an efficient and parallelizable implementation, which is tested in particular on a competitive and well-understood high-dimensional benchmark problem in machine learning.
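To make the particle dynamics concrete, the following is a minimal sketch of classical discrete-time particle swarm optimization with personal-best and global-best memory, applied to the nonconvex Rastrigin benchmark. The hyperparameter values (inertia and acceleration coefficients), the benchmark, and all function names are illustrative choices for this sketch, not the paper's specific continuous-time scheme or implementation.

```python
import numpy as np

def rastrigin(x):
    # Classical nonconvex benchmark; global minimum 0 at the origin,
    # surrounded by many local minima. Works on batched inputs.
    return 10 * x.shape[-1] + np.sum(x**2 - 10 * np.cos(2 * np.pi * x), axis=-1)

def pso(objective, dim=2, n_particles=50, n_steps=500,
        inertia=0.7, c_personal=1.5, c_global=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))  # particle positions
    v = np.zeros_like(x)                             # particle velocities
    p_best = x.copy()                                # personal bests (memory)
    p_val = objective(x)
    g_best = p_best[np.argmin(p_val)].copy()         # global best seen so far
    for _ in range(n_steps):
        # Random weights give the stochastic drift toward the two bests.
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = (inertia * v
             + c_personal * r1 * (p_best - x)
             + c_global * r2 * (g_best - x))
        x = x + v
        val = objective(x)
        improved = val < p_val                       # update memories
        p_best[improved] = x[improved]
        p_val[improved] = val[improved]
        g_best = p_best[np.argmin(p_val)].copy()
    return g_best, p_val.min()

best_x, best_val = pso(rastrigin)
```

The inertia term plays the role of the second-order (velocity) dynamics in the continuous-time formulation, while the personal-best attraction is the memory mechanism the abstract refers to; dropping the memory and keeping only a consensus point recovers the zero-inertia, consensus-based regime studied in the mean-field analysis.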