Approximation Hardness for A Class of Sparse Optimization Problems

Cited: 0
Authors
Chen, Yichen [1]
Ye, Yinyu [2]
Wang, Mengdi [3]
Affiliations
[1] Princeton Univ, Dept Comp Sci, Princeton, NJ 08544 USA
[2] Stanford Univ, Dept Management Sci & Engn, Stanford, CA 94304 USA
[3] Princeton Univ, Dept Operat Res & Financial Engn, Princeton, NJ 08544 USA
Keywords
nonconvex optimization; computational complexity; variable selection; NP-hardness; sparsity; penalized likelihood; signals
DOI
None available
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology]
Discipline Classification Code
0812
Abstract
In this paper, we consider three typical optimization problems with a convex loss function and a nonconvex sparse penalty or constraint. For the sparse penalized problem, we prove that finding an O(n^{c1} d^{c2})-optimal solution to an n × d problem is strongly NP-hard for any c1, c2 ∈ [0, 1) such that c1 + c2 < 1. For two constrained versions of the sparse optimization problem, we show that it is intractable to approximately compute a solution path associated with increasing values of some tuning parameter. The hardness results apply to a broad class of loss functions and sparse penalties. They suggest that one cannot even approximately solve these three problems in polynomial time, unless P = NP.
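For concreteness, the sparse penalized problem described in the abstract can be written in the following generic form. This is a sketch of the standard formulation, not necessarily the paper's exact statement: the loss ℓ and penalty p below stand in for the broad families the paper covers, with ℓ(·, b) convex in its first argument and p a nonconvex sparsity-inducing penalty (classic examples include the L_q quasi-norm with 0 < q < 1, SCAD, and MCP).

```latex
\min_{x \in \mathbb{R}^{d}} \;
  \sum_{i=1}^{n} \ell\!\left(a_i^{\top} x,\, b_i\right)
  \;+\; \lambda \sum_{j=1}^{d} p\!\left(|x_j|\right)
```

Here the data consist of n samples (a_i, b_i) in d dimensions, matching the "n × d problem" in the abstract, and λ > 0 is the tuning parameter whose solution path the constrained variants trace.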
Pages: 27