Convex Maximization via Adjustable Robust Optimization

Cited by: 5
Authors
Selvi, Aras [1 ]
Ben-Tal, Aharon [2 ,3 ,4 ]
Brekelmans, Ruud [5 ]
den Hertog, Dick [6 ]
Affiliations
[1] Imperial Coll London, Imperial Coll Business Sch, London SW7 2AZ, England
[2] Technion Israel Inst Technol, Fac Ind Engn & Management, IL-3200003 Haifa, Israel
[3] Shenkar Coll, IL-52526 Ramat Gan, Israel
[4] Tilburg Univ, Ctr Econ & Business Res, NL-5037 AB Tilburg, Netherlands
[5] Tilburg Univ, Dept Econometr & Operat Res, NL-5037 AB Tilburg, Netherlands
[6] Univ Amsterdam, Fac Econ & Business, NL-1012 WX Amsterdam, Netherlands
Keywords
nonlinear optimization; convex maximization; adjustable robust optimization; concave minimization;
DOI
10.1287/ijoc.2021.1134
Chinese Library Classification
TP39 [Computer Applications];
Discipline Classification Codes
081203; 0835;
Abstract
Maximizing a convex function over convex constraints is an NP-hard problem in general. We prove that such a problem can be reformulated as an adjustable robust optimization (ARO) problem in which each adjustable variable corresponds to a unique constraint of the original problem. We use ARO techniques to obtain approximate solutions to the convex maximization problem. To demonstrate the complete approximation scheme, we distinguish between the case of a single nonlinear constraint and the case of multiple linear constraints. For the first case, we give three examples in which the adjustable variable can be eliminated analytically and the resulting static robust optimization problem can be solved approximately and efficiently. In particular, we show that the norm-constrained log-sum-exp (geometric) maximization problem can be approximated by (convex) exponential cone optimization techniques. For the second case of multiple linear constraints, the equivalent ARO problem can be represented as an adjustable robust linear optimization problem. Applying linear decision rules then yields a safe approximation of the constraints. The resulting problem is a convex optimization problem, and solving it gives an upper bound on the global optimum value of the original problem. By using the optimal linear decision rule, we obtain a lower-bound solution as well. We derive the approximation problems explicitly for quadratic maximization, geometric maximization, and sum-of-max-linear-terms maximization problems with multiple linear constraints. Numerical experiments show that, in contrast to state-of-the-art solvers, we can approximate large-scale problems swiftly with tight bounds. In several cases, the upper and lower bounds coincide, which certifies global optimality in those cases.
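The abstract compares the ARO-based bounds against state-of-the-art global solvers on problems such as quadratic maximization over linear constraints. As a point of reference only, the following is a minimal sketch of such a baseline, not the paper's ARO method: globally maximizing a convex quadratic over a polytope with Gurobi's nonconvex QP algorithm. The instance data, model names, and the assumption of Gurobi 9.0+ with gurobipy are illustrative assumptions, not taken from the paper.

```python
# Minimal baseline sketch (assumes Gurobi 9.0+ and gurobipy; not the paper's method):
# globally maximize a convex quadratic x' Q x over linear constraints A x <= b.
import numpy as np
import gurobipy as gp
from gurobipy import GRB

rng = np.random.default_rng(seed=0)
n, m = 5, 8                              # hypothetical small instance
Q_half = rng.standard_normal((n, n))
Q = Q_half @ Q_half.T                    # positive semidefinite, so x -> x' Q x is convex
A = rng.standard_normal((m, n))
b = np.ones(m)                           # x = 0 stays feasible

model = gp.Model("convex_quadratic_maximization")
x = model.addMVar(n, lb=-1.0, ub=1.0, name="x")   # box bounds keep the problem bounded
model.addConstr(A @ x <= b, name="linear_constraints")
model.setObjective(x @ Q @ x, GRB.MAXIMIZE)       # maximizing a convex function: nonconvex QP
model.Params.NonConvex = 2                        # request Gurobi's global nonconvex algorithm
model.optimize()
print("global maximum found by the baseline solver:", model.ObjVal)
```

On small instances such a global solver returns the exact optimum, which is the kind of benchmark against which the paper's upper and lower ARO bounds are reported to be tight and much faster at large scale.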
Pages: 2091-2105
Number of pages: 15