Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization

Cited by: 0
Author
Szilárd Csaba László
Affiliation
[1] Technical University of Cluj-Napoca, Department of Mathematics
Source
Mathematical Programming | 2021, Vol. 190
Keywords
Inertial algorithm; Non-convex optimization; Kurdyka–Łojasiewicz inequality; Convergence rate; Łojasiewicz exponent; MSC: 90C26; 90C30; 65K10
DOI
Not available
Abstract
We investigate an inertial algorithm of gradient type in connection with the minimization of a non-convex differentiable function. The algorithm is formulated in the spirit of Nesterov’s accelerated convex gradient method. We prove some abstract convergence results which, applied to our numerical scheme, allow us to show that the generated sequences converge to a critical point of the objective function, provided a regularization of the objective function satisfies the Kurdyka–Łojasiewicz property. Further, we obtain convergence rates for the generated sequences and the objective function values, formulated in terms of the Łojasiewicz exponent of a regularization of the objective function. Finally, some numerical experiments are presented in order to compare our numerical scheme with some well-known algorithms from the literature.
Pages: 285–329
Number of pages: 44
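
To make the scheme described in the abstract concrete, below is a minimal Python sketch of a Nesterov-style inertial gradient iteration. It is an illustration under assumed choices, not the paper's algorithm: the function name inertial_gradient, the inertial coefficient beta * n / (n + alpha), and all parameter values are hypothetical stand-ins for the parameter regimes analyzed in the article.

    import numpy as np

    def inertial_gradient(grad, x0, step=0.05, beta=0.5, alpha=3.0,
                          max_iter=10_000, tol=1e-10):
        # Illustrative Nesterov-style inertial step (hypothetical coefficients):
        #   y_n     = x_n + beta * n / (n + alpha) * (x_n - x_{n-1})
        #   x_{n+1} = y_n - step * grad(y_n)
        x_prev = np.asarray(x0, dtype=float)
        x = x_prev.copy()
        for n in range(1, max_iter + 1):
            y = x + beta * n / (n + alpha) * (x - x_prev)  # extrapolated point
            x_prev, x = x, y - step * grad(y)              # gradient step at y
            if np.linalg.norm(x - x_prev) < tol:           # stop once iterates stall
                break
        return x

    # Smooth non-convex test function g(x) = x**4 / 4 - x**2 / 2,
    # with critical points at -1, 0 and 1; its gradient is x**3 - x.
    x_star = inertial_gradient(lambda x: x**3 - x, x0=np.array([2.0]))
    print(x_star)  # approaches the critical point 1.0

Consistent with the abstract, a scheme of this kind is only guaranteed to reach a critical point of the objective (here x = 1), not a global minimizer; the convergence rates in the paper quantify how fast this happens in terms of the Łojasiewicz exponent.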