Quasi-Newton type proximal gradient method for nonconvex nonsmooth composite optimization problems
Cited by: 0
Authors:
Wang, Tanxing [1]
Jiang, Yaning [2]
Cai, Xingju [1]
Affiliations:
[1] Nanjing Normal Univ, Sch Math Sci, Minist Educ, Key Lab NSLSCS, Nanjing 210023, Peoples R China
[2] Nanjing Univ Finance & Econ, Sch Appl Math, Nanjing 210023, Peoples R China
Funding:
National Natural Science Foundation of China;
Keywords:
Proximal gradient method;
Variable metric;
Quasi-Newton method;
Line search;
Nonconvex nonsmooth composite optimization;
ALGORITHM;
MINIMIZATION;
CONVERGENCE;
DOI:
10.1007/s10898-025-01511-7
Chinese Library Classification:
C93 [Management];
O22 [Operations Research];
Discipline codes:
070105;
12;
1201;
1202;
120202;
Abstract:
In this paper, we propose two quasi-Newton type proximal gradient methods for a class of nonconvex nonsmooth composite optimization problems, where the objective function is the sum of a smooth nonconvex function and a strictly increasing, concave, differentiable function composed with a convex nonsmooth function. The first proposed method, called the quasi-Newton proximal gradient (QNPG) method, updates the variable metric of the proximal operator by a quasi-Newton strategy. The global convergence of QNPG is established under the Kurdyka-Łojasiewicz framework. However, proximal operators with quasi-Newton matrices are not easy to compute for some practical problems. We therefore further give a general framework for the proximal gradient method. This framework relies on an implementable inexactness condition for the computation of the proximal operator and on a line search procedure in which the search directions can be selected arbitrarily. We prove that the line search criterion is well defined and establish convergence of subsequences. Additionally, numerical simulations on an image processing model demonstrate the feasibility and effectiveness of the proposed methods.
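The abstract's core idea, a proximal gradient step taken in a variable metric chosen by a quasi-Newton-style update, can be illustrated with a minimal sketch. This is not the paper's QNPG algorithm: as a simplification it uses the plain l1 norm for the nonsmooth part (rather than the concave composition the paper treats), a diagonal Barzilai-Borwein scaling as a stand-in for the quasi-Newton metric (a diagonal metric keeps the proximal operator in closed form, mirroring the paper's observation that the prox is hard with full quasi-Newton matrices), and a simple backtracking line search on the step toward the prox point. All function and parameter names are illustrative.

```python
import numpy as np

def prox_l1_diag(v, lam, d):
    """Prox of lam*||x||_1 in the metric diag(d):
    argmin_x lam*||x||_1 + 0.5*sum(d*(x - v)**2),
    i.e. coordinate-wise soft-thresholding at level lam/d."""
    return np.sign(v) * np.maximum(np.abs(v) - lam / d, 0.0)

def vm_prox_grad(f, grad_f, x0, lam, n_iter=200, sigma=1e-4, beta=0.5):
    """Variable-metric proximal gradient with backtracking line search
    (illustrative sketch; diagonal BB scaling stands in for the
    quasi-Newton metric update of the paper)."""
    x = x0.copy()
    d = np.ones_like(x)                      # diagonal metric D_k = diag(d)
    F = lambda z: f(z) + lam * np.abs(z).sum()
    x_old, g_old = None, None
    for _ in range(n_iter):
        g = grad_f(x)
        if g_old is not None:                # BB1-style diagonal scaling
            s, y = x - x_old, g - g_old
            sy, ss = s @ y, s @ s
            if sy > 1e-12 and ss > 1e-12:
                d = np.full_like(x, np.clip(sy / ss, 1e-4, 1e4))
        x_old, g_old = x, g
        # candidate: prox of the gradient step, taken in the metric diag(d)
        x_hat = prox_l1_diag(x - g / d, lam, d)
        dx = x_hat - x                       # line search direction
        # backtrack until a sufficient-decrease condition on F holds
        t = 1.0
        while t > 1e-10 and F(x + t * dx) > F(x) - sigma * t * (d * dx**2).sum():
            t *= beta
        x = x + t * dx
    return x
```

For a sanity check one can run it on a least-squares-plus-l1 instance (convex, so only a smoke test of the mechanics, not of the nonconvex theory) and verify that the composite objective decreases from the starting point.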
Pages: 693 / 711
Page count: 19