Parallel two-phase methods for global optimization on GPU

Cited by: 8
Authors
Ferreiro, Ana M. [1 ,2 ]
Garcia-Rodriguez, Jose Antonio [1 ,2 ]
Vazquez, Carlos [1 ,2 ]
Costa e Silva, E. [3 ]
Correia, A. [3 ]
Affiliations
[1] Fac Informat, Dept Math, CITIC, Campus Elvina S-N, La Coruna 15071, Spain
[2] ITMATI, La Coruna, Spain
[3] Porto Polytech, CIICESI ESTGF, Porto, Portugal
Keywords
Global optimization; Basin Hopping; Conjugate gradient method; Parallelization; GPUs; Nelder-Mead simplex method; Convergence properties; Genetic algorithm; Pattern search; Minimization
DOI
10.1016/j.matcom.2018.06.005
CLC number
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
Developing general global optimization algorithms is a difficult task, especially for functions with a huge number of local minima in high dimensions. Stochastic metaheuristic algorithms are often the only practical alternative for such problems, since they are designed to escape local optima in search of the global one. Their main drawback, however, is that they require a large number of function evaluations in order to skip or discard local optima, and therefore exhibit a low convergence rate and, as a result, a high computational cost. The situation becomes even worse as the dimension increases: the number of local minima usually grows sharply, as does the cost of each function evaluation, making it harder to cover the whole search space. Deterministic local optimization methods, on the other hand, exhibit faster convergence rates and require fewer function evaluations, and thus involve a lower computational cost, although they can get stuck in local minima. One way to obtain faster global optimization algorithms is to combine local and global methods, so as to benefit from the higher convergence rates of the local methods while retaining the global search properties. Another way to speed up global optimization algorithms is to exploit efficient parallel hardware architectures. Nowadays, a good alternative is to take advantage of graphics processing units (GPUs), which are massively parallel processors and have become an accessible and cheap option for high performance computing. In this work, a parallel GPU implementation of some hybrid two-phase optimization methods is presented; these methods combine the metaheuristic Simulated Annealing algorithm, used to locate a global minimum, with different local optimization methods, namely a conjugate gradient algorithm and a version of the Nelder-Mead method.
The performance of the parallelized versions of the above hybrid methods is analyzed on a set of well known test problems. Results show that GPUs represent an efficient alternative for the parallel implementation of two-phase global optimization methods. (C) 2018 International Association for Mathematics and Computers in Simulation (IMACS). Published by Elsevier B.V. All rights reserved.
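The abstract describes a two-phase scheme: a stochastic global phase (Simulated Annealing) followed by a deterministic local refinement (conjugate gradient or Nelder-Mead). The paper's GPU kernels are not reproduced here, but the serial two-phase idea can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name `sa_then_local`, the cooling schedule, and the pattern-search polish (a simple stand-in for the conjugate gradient / Nelder-Mead local phase of the paper) are all assumptions for illustration.

```python
import math
import random

def sa_then_local(f, x0, bounds, n_sa=2000, t0=1.0, seed=0):
    """Two-phase sketch: simulated annealing global phase, then a
    simple pattern-search local polish (an illustrative stand-in for
    the conjugate gradient / Nelder-Mead local methods of the paper)."""
    rng = random.Random(seed)
    lo, hi = bounds
    x, fx = list(x0), f(x0)
    best, fbest = list(x), fx
    # Phase 1: simulated annealing with a linear cooling schedule.
    for k in range(n_sa):
        t = t0 * (1 - k / n_sa) + 1e-9
        cand = [min(hi, max(lo, xi + rng.gauss(0, 0.5 * t + 0.01)))
                for xi in x]
        fc = f(cand)
        # Metropolis acceptance: always accept improvements, and
        # accept uphill moves with probability exp(-(fc - fx) / t).
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
    # Phase 2: local refinement by coordinate-wise pattern search,
    # halving the step whenever no axis direction improves.
    step = 0.1
    while step > 1e-6:
        improved = False
        for i in range(len(best)):
            for d in (+step, -step):
                cand = list(best)
                cand[i] += d
                fc = f(cand)
                if fc < fbest:
                    best, fbest, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return best, fbest

# Usage on the Rastrigin test function (many local minima,
# global minimum 0 at the origin), starting far from the optimum.
def rastrigin(x):
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

x, fx = sa_then_local(rastrigin, [3.0, -2.5], bounds=(-5.12, 5.12))
```

In the paper this scheme is parallelized on the GPU; a natural mapping (and the usual one for such methods) is to run many annealing chains and local searches concurrently, one per thread or thread block, and keep the best result.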
Pages: 67-90 (24 pages)