Convergence and Error Bound for Perturbation of Linear Programs

Authors
Paul Tseng
Affiliation
[1] University of Washington, Department of Mathematics
Source
Computational Optimization and Applications | 1999 / Volume 13
Keywords
linear program; perturbation; convergence; error bound
DOI
Not available
Abstract
In various penalty/smoothing approaches to solving a linear program, one regularizes the problem by adding to the linear cost function a separable nonlinear function multiplied by a small positive parameter. Popular choices of this nonlinear function include the quadratic function, the logarithm function, and the x ln(x)-entropy function. Furthermore, the solutions generated by such approaches may satisfy the linear constraints only inexactly and thus are optimal solutions of the regularized problem with a perturbed right-hand side. We give a general condition for such an optimal solution to converge to an optimal solution of the original problem as the perturbation parameter tends to zero. In the case where the nonlinear function is strictly convex, we further derive a local (error) bound on the distance from such an optimal solution to the limiting optimal solution of the original problem, expressed in terms of the perturbation parameter.
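
As an informal illustration (not taken from the paper), the sketch below applies one of the regularizers named in the abstract, the x ln(x) entropy function, to a made-up linear program over the unit simplex. For this feasible set the regularized minimizer has a closed form, so the code simply checks numerically that it approaches an LP optimum as the regularization parameter tends to zero. The problem data and the use of scipy's linprog as a reference solver are assumptions for illustration only; the sketch does not model the perturbed right-hand side treated in the paper's error bound.

# A minimal numerical sketch (not from the paper): entropy regularization of a toy LP
# over the unit simplex, illustrating convergence of the regularized solution to an
# LP optimum as the regularization parameter tends to zero. Problem data are made up.
import numpy as np
from scipy.optimize import linprog

# Toy LP:  min c^T x  subject to  sum(x) = 1,  x >= 0
c = np.array([1.0, 1.5, 3.0])
n = c.size

# Reference optimum from a standard LP solver.
ref = linprog(c, A_eq=np.ones((1, n)), b_eq=[1.0],
              bounds=[(0, None)] * n, method="highs")
x_star = ref.x

def entropy_regularized(eps):
    """Minimizer of c^T x + eps * sum(x_i ln x_i) over the unit simplex.
    For this feasible set the solution has the closed form x_i proportional
    to exp(-c_i / eps)."""
    z = np.exp(-(c - c.min()) / eps)   # shift by min(c) for numerical stability
    return z / z.sum()

for eps in [1.0, 0.3, 0.1, 0.03, 0.01]:
    x_eps = entropy_regularized(eps)
    print(f"eps = {eps:5.2f}   ||x_eps - x*|| = {np.linalg.norm(x_eps - x_star):.3e}")

The printed distances shrink as eps decreases, which is the qualitative behavior the paper makes precise with a convergence condition and, for strictly convex regularizers, a local error bound in terms of the perturbation parameter.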
Pages: 221-230
Number of pages: 9