Inequality constrained stochastic nonlinear optimization via active-set sequential quadratic programming

Cited by: 14
Authors
Na, Sen [1 ,2 ]
Anitescu, Mihai [3 ]
Kolar, Mladen [4 ]
Affiliations
[1] Univ Calif Berkeley, Dept Stat, Berkeley, CA 94720 USA
[2] Int Comp Sci Inst, Berkeley, CA 94704 USA
[3] Argonne Natl Lab, Math & Comp Sci Div, Argonne, IL USA
[4] Univ Chicago, Booth Sch Business, Chicago, IL USA
Keywords
Inequality constraints; Stochastic optimization; Exact augmented Lagrangian; Sequential quadratic programming; Augmented Lagrangian function; Exact penalty function; Primal-dual algorithm; Sample size; Convergence; Complexity
DOI
10.1007/s10107-023-01935-7
CLC number
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
We study nonlinear optimization problems with a stochastic objective and deterministic equality and inequality constraints, which emerge in numerous applications including finance, manufacturing, power systems and, recently, deep neural networks. We propose an active-set stochastic sequential quadratic programming (StoSQP) algorithm that uses a differentiable exact augmented Lagrangian as the merit function. The algorithm adaptively selects the penalty parameters of the augmented Lagrangian and performs a stochastic line search to decide the stepsize. Global convergence is established: for any initialization, the KKT residuals converge to zero almost surely. Our algorithm and analysis further develop the prior work of Na et al. (Math Program, 2022. https://doi.org/10.1007/s10107-022-01846-z). Specifically, we allow nonlinear inequality constraints without requiring the strict complementarity condition; refine some of the designs in Na et al. (2022), such as the feasibility error condition and the monotonically increasing sample size; strengthen the global convergence guarantee; and improve the sample complexity on the objective Hessian. We demonstrate the performance of the algorithm on a subset of nonlinear problems from the CUTEst test set and on constrained logistic regression problems.
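To make the abstract's algorithmic recipe concrete, the following is a minimal sketch of a stochastic SQP iteration on a toy equality-constrained problem. It is an illustrative assumption throughout, not the paper's method: the toy objective, constants, and constraint are invented, the merit function is a plain l2-penalty stand-in for the paper's differentiable exact augmented Lagrangian, and the line search is a naive backtracking rather than the paper's stochastic line search with adaptive penalty parameters.

```python
import numpy as np

# Toy problem (illustrative only, not the paper's test problems):
#   min_x  E_xi[ 0.5 * ||x - xi||^2 ],   xi ~ N(b, sigma^2 I)
#   s.t.   c(x) = x[0] + x[1] - 1 = 0
# The gradient of the expected objective is x - b, so a stochastic
# gradient estimate is x - xi for a fresh sample xi.
rng = np.random.default_rng(0)
b = np.array([2.0, -1.0])
sigma = 0.1
A = np.array([[1.0, 1.0]])           # constraint Jacobian (constant here)

def c(x):
    return np.array([x[0] + x[1] - 1.0])

def sample_grad(x):
    # Unbiased stochastic gradient estimate.
    return x - (b + sigma * rng.standard_normal(2))

def merit(x, g, mu):
    # Simple l2-penalty merit; a stand-in for the exact augmented
    # Lagrangian of the paper, which also involves dual variables.
    return 0.5 * g @ g + mu * np.linalg.norm(c(x))

x = np.zeros(2)
mu = 10.0                            # fixed penalty (the paper adapts it)
H = np.eye(2)                        # Hessian approximation
for it in range(200):
    g = sample_grad(x)
    # SQP subproblem = Newton-KKT system:
    #   [H  A^T] [dx ]     [-g]
    #   [A   0 ] [lam]  =  [-c]
    K = np.block([[H, A.T], [A, np.zeros((1, 1))]])
    sol = np.linalg.solve(K, -np.concatenate([g, c(x)]))
    dx = sol[:2]
    # Naive backtracking line search on the noisy merit function.
    alpha = 1.0
    while (merit(x + alpha * dx, sample_grad(x + alpha * dx), mu)
           > merit(x, g, mu) and alpha > 1e-8):
        alpha *= 0.5
    x = x + alpha * dx

print(x)      # hovers near the constrained optimum (2, -1)
print(c(x))   # feasibility residual, driven to ~0
```

Since the constraint here is linear, one accepted full step makes the iterate exactly feasible and later steps preserve feasibility; the remaining fluctuation around the optimum is of the order of the gradient noise.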
Pages: 279-353 (75 pages)
References
58 records in total
[1] Bandeira A. S., Scheinberg K., Vicente L. N. Convergence of trust-region methods based on probabilistic models. SIAM Journal on Optimization, 2014, 24(3): 1238-1264.
[2] Berahas A. S., Cao L., Scheinberg K. Global convergence rate analysis of a generic line search algorithm with noise. SIAM Journal on Optimization, 2021, 31(2): 1489-1518.
[3] Berahas A. S., 2022, arXiv preprint.
[4] Berahas A. S., 2021, arXiv preprint.
[5] Berahas A. S., Curtis F. E., Robinson D., Zhou B. Sequential quadratic optimization for nonlinear equality constrained stochastic optimization. SIAM Journal on Optimization, 2021, 31(2): 1352-1379.
[6] Bertsekas D. P. Constrained Optimization and Lagrange Multiplier Methods. 2014. DOI: 10.1016/C2013-0-10366-2.
[7] Birge J. R. INFORMS Journal on Computing, 1997, 9(2): 111. DOI: 10.1287/ijoc.9.2.111.
[8] Blanchet J. INFORMS Journal on Optimization, 2019, 1: 92.
[9] Boggs P. T. Acta Numerica, 1995, 4: 1. DOI: 10.1017/s0962492900002518.
[10] Bollapragada R., Byrd R., Nocedal J. Adaptive sampling strategies for stochastic optimization. SIAM Journal on Optimization, 2018, 28(4): 3312-3343.