Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming

Cited by: 14
Authors
Na, Sen [1 ,2 ]
Anitescu, Mihai [3 ]
Kolar, Mladen [4 ]
Affiliations
[1] Univ Calif Berkeley, Dept Stat, Berkeley, CA 94720 USA
[2] Int Comp Sci Inst, Berkeley, CA 94704 USA
[3] Argonne Natl Lab, Math & Comp Sci Div, Argonne, IL USA
[4] Univ Chicago, Booth Sch Business, Chicago, IL USA
Keywords
Inequality constraints; Stochastic optimization; Exact augmented Lagrangian; Sequential quadratic programming; Augmented Lagrangian function; Exact penalty function; Primal-dual algorithm; Sample size; Convergence; Complexity
DOI
10.1007/s10107-023-01935-7
Chinese Library Classification (CLC)
TP31 [Computer software]
Subject classification codes
081202; 0835
Abstract
We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints, which emerge in numerous applications including finance, manufacturing, power systems and, recently, deep neural networks. We propose an active-set stochastic sequential quadratic programming (StoSQP) algorithm that utilizes a differentiable exact augmented Lagrangian as the merit function. The algorithm adaptively selects the penalty parameters of the augmented Lagrangian and performs a stochastic line search to decide the stepsize. Global convergence is established: for any initialization, the KKT residuals converge to zero almost surely. Our algorithm and analysis further develop the prior work of Na et al. (Math Program, 2022. https://doi.org/10.1007/s10107-022-01846-z). Specifically, we allow nonlinear inequality constraints without requiring the strict complementarity condition; refine some of the designs in Na et al. (2022), such as the feasibility error condition and the monotonically increasing sample size; strengthen the global convergence guarantee; and improve the sample complexity on the objective Hessian. We demonstrate the performance of the designed algorithm on a subset of nonlinear problems collected in the CUTEst test set and on constrained logistic regression problems.
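As a reading aid, the following is a minimal LaTeX sketch of the problem class and the optimality system described in the abstract; the symbols F, xi, c, g and the multipliers mu, lambda are notation chosen here for illustration and need not match the paper's own.
% Problem class: stochastic objective with deterministic equality and
% inequality constraints.
\begin{align*}
  \min_{x \in \mathbb{R}^n} \quad & f(x) = \mathbb{E}_{\xi}\!\left[ F(x; \xi) \right] \\
  \text{s.t.} \quad & c(x) = 0, \qquad g(x) \le 0.
\end{align*}
% KKT system whose residual the StoSQP iterates drive to zero almost
% surely; \mu are the equality multipliers, \lambda \ge 0 the
% inequality multipliers.
\begin{align*}
  \nabla f(x) + \nabla c(x)^{\top} \mu + \nabla g(x)^{\top} \lambda &= 0, \\
  c(x) &= 0, \\
  g(x) \le 0, \quad \lambda \ge 0, \quad \lambda_i\, g_i(x) &= 0 \quad \text{for all } i.
\end{align*}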
Pages: 279-353
Number of pages: 75
Related papers
58 items in total
[41]   Recursive quadratic programming algorithm that uses an exact augmented Lagrangian function [J].
Lucidi, S.
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 1990, 67 (02) :227-245
[42]   New results on a class of exact augmented Lagrangians [J].
Lucidi, S.
JOURNAL OF OPTIMIZATION THEORY AND APPLICATIONS, 1988, 58 (02) :259-282
[43]   New results on a continuously differentiable exact penalty function [J].
Lucidi, Stefano.
SIAM JOURNAL ON OPTIMIZATION, 1992, 2 (04) :558-574
[44]   A sequential quadratic programming algorithm with an additional equality constrained phase [J].
Morales, Jose Luis ;
Nocedal, Jorge ;
Wu, Yuchen .
IMA JOURNAL OF NUMERICAL ANALYSIS, 2012, 32 (02) :553-579
[45]  
Na S., 2021, Advances in Neural Information Processing Systems, V34, P12441
[46]  
Na S., 2022, ARXIV
[47]   An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians [J].
Na, Sen ;
Anitescu, Mihai ;
Kolar, Mladen .
MATHEMATICAL PROGRAMMING, 2023, 199 (1-2) :721-791
[48]  
Nocedal J., Wright S. J., 2006, Numerical Optimization, Springer Series in Operations Research and Financial Engineering, DOI 10.1007/978-0-387-40065-5
[49]   Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture From Small Angle X-Ray Scattering Intensity Measurements [J].
Onuk, A. Emre ;
Akcakaya, Murat ;
Bardhan, Jaydeep P. ;
Erdogmus, Deniz ;
Brooks, Dana H. ;
Makowski, Lee .
IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2015, 63 (20) :5383-5394
[50]  
Oztoprak F., 2021, ARXIV