Globally convergent Newton-type methods for multiobjective optimization

Cited by: 19
Authors
Goncalves, M. L. N. [1 ]
Lima, F. S. [1 ]
Prudente, L. F. [1 ]
Affiliation
[1] Univ Fed Goias, IME, Campus 2, Caixa Postal 131, BR-74001970 Goiania, GO, Brazil
Keywords
Multiobjective optimization; Newton method; Global convergence; Numerical experiments; Projected gradient method; Vector optimization; Line searches; Pareto set
DOI
10.1007/s10589-022-00414-7
Chinese Library Classification (CLC)
C93 [Management Science]; O22 [Operations Research]
Subject Classification Codes
070105; 12; 1201; 1202; 120202
Abstract
We propose two Newton-type methods for solving (possibly) nonconvex unconstrained multiobjective optimization problems. The first is directly inspired by the Newton method designed for convex problems, whereas the second combines second-order information of the objective functions with ingredients of the steepest descent method. A key point of our approaches is to impose safeguard strategies on the search directions. These strategies are associated with conditions that prevent, at each iteration, the search direction from being too close to orthogonal to the multiobjective steepest descent direction, and that require proportionality between the lengths of the two directions. To fulfill these safeguard conditions on the search directions of the Newton-type methods, we adopt the technique in which the Hessians are modified, if necessary, by adding multiples of the identity. For the first Newton-type method, we also show that, under convexity assumptions, the local superlinear rate of convergence (or quadratic, in the case where the Hessians of the objectives are Lipschitz continuous) to a local efficient point of the given problem is recovered. Global convergence of both methods is established by first presenting a general algorithm and proving its global convergence, and then showing that the new methods are instances of this general algorithm. Numerical experiments illustrating the practical advantages of the proposed Newton-type schemes are presented.
Pages: 403-434
Number of pages: 32
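As a rough illustration of the safeguard idea summarized in the abstract, the Python sketch below regularizes the Hessian by adding multiples of the identity until a Newton-type direction satisfies an angle condition and a length-proportionality condition relative to the steepest descent direction. This is a single-objective simplification with assumed parameter names (theta, beta, tau0) and an assumed update rule; it is not the authors' algorithm for the multiobjective Newton subproblem.

```python
# Minimal single-objective sketch of the "modified Hessian" safeguard:
# solve (H + tau*I) d = -grad for increasing tau until d is not too close
# to orthogonal to the steepest descent direction and its length is
# proportional to it. Parameters theta, beta, tau0 are illustrative.
import numpy as np

def safeguarded_newton_direction(grad, hess, theta=1e-4, beta=1e-6,
                                 tau0=1e-3, max_tries=50):
    """Return a direction d satisfying
         -grad @ d >= theta * ||grad|| * ||d||   (angle safeguard)
         ||d||     >= beta  * ||d_sd||           (proportionality safeguard)
    where d_sd = -grad is the steepest descent direction."""
    n = grad.size
    d_sd = -grad
    tau = 0.0
    for _ in range(max_tries):
        try:
            d = np.linalg.solve(hess + tau * np.eye(n), d_sd)
        except np.linalg.LinAlgError:
            d = None  # singular system; increase the regularization
        if d is not None:
            angle_ok = -grad @ d >= theta * np.linalg.norm(grad) * np.linalg.norm(d)
            length_ok = np.linalg.norm(d) >= beta * np.linalg.norm(d_sd)
            if angle_ok and length_ok:
                return d
        tau = max(2.0 * tau, tau0)  # add a larger multiple of the identity
    return d_sd  # fall back to steepest descent

# Example with an indefinite Hessian (nonconvex model f(x) = x0^2 - x1^2):
x = np.array([1.0, 2.0])
g = np.array([2.0 * x[0], -2.0 * x[1]])
H = np.diag([2.0, -2.0])
print(safeguarded_newton_direction(g, H))
```

In the multiobjective setting the direction comes from a min-max subproblem over all objectives rather than a single linear solve, but the same safeguard logic (regularize, test the conditions, retry) applies.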