[41] Distributed stochastic gradient tracking methods with momentum acceleration for non-convex optimization [J]. Computational Optimization and Applications, 2023, 84: 531-572.
[43] Probabilistic guarantees of stochastic recursive gradient in non-convex finite sum problems [C]. Advances in Knowledge Discovery and Data Mining, Pt III, PAKDD 2024, 2024, 14647: 142-154.
[46] Double-inertial proximal gradient algorithm for difference-of-convex programming [J]. Pacific Journal of Optimization, 2022, 18 (02): 415-437.
[48] Stochastic variance reduced gradient with hyper-gradient for non-convex large-scale learning [J]. Applied Intelligence, 2023, 53: 28627-28641.
[49] The optimal convergence rate of Adam-type algorithms for non-smooth strongly convex cases [J]. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2022, 50 (09): 2049-2059.
[50] Differentially private non-convex optimization under the KL condition with optimal rates [C]. International Conference on Algorithmic Learning Theory, Vol 237, 2024, 237.