Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives

Times Cited: 0
Authors
Dedieu, Antoine [1 ]
Hazimeh, Hussein [1 ]
Mazumder, Rahul [2 ]
Affiliations
[1] MIT, Operat Res Ctr, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[2] MIT, Operat Res Ctr, Ctr Stat, Sloan Sch Management, Cambridge, MA 02142 USA
Funding
U.S. National Science Foundation;
Keywords
sparsity; sparse classification; ℓ0 regularization; mixed integer programming; GENERALIZED LINEAR-MODELS; LOGISTIC-REGRESSION; SUBSET-SELECTION; VARIABLE SELECTION; CONVERGENCE; ALGORITHMS; LASSO;
DOI
Not available
CLC Classification Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
We consider a discrete optimization formulation for learning sparse classifiers, where the outcome depends upon a linear combination of a small subset of features. Recent work has shown that mixed integer programming (MIP) can be used to solve (to optimality) ℓ0-regularized regression problems at scales much larger than what was conventionally considered possible. Despite their usefulness, MIP-based global optimization approaches are significantly slower than the relatively mature algorithms for ℓ1-regularization and heuristics for nonconvex regularized problems. We aim to bridge this gap in computation times by developing new MIP-based algorithms for ℓ0-regularized classification. We propose two classes of scalable algorithms: an exact algorithm that can handle p ≈ 50,000 features in a few minutes, and approximate algorithms that can address instances with p ≈ 10^6 in times comparable to the fast ℓ1-based algorithms. Our exact algorithm is based on the novel idea of integrality generation, which solves the original problem (with p binary variables) via a sequence of mixed integer programs that involve a small number of binary variables. Our approximate algorithms are based on coordinate descent and local combinatorial search. In addition, we present new estimation error bounds for a class of ℓ0-regularized estimators. Experiments on real and synthetic data demonstrate that our approach leads to models with considerably improved statistical performance (especially variable selection) compared to competing methods.
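To illustrate the coordinate-descent idea the abstract mentions for the approximate algorithms, the following is a minimal sketch of cyclic coordinate descent with hard thresholding for an ℓ0-regularized logistic loss. This is not the paper's implementation; the stepsizes, stopping rule, and function name are illustrative assumptions, and the paper additionally combines this with local combinatorial search.

```python
import numpy as np

def coordinate_descent_l0(X, y, lam, n_iters=100):
    """Cyclic coordinate descent with hard thresholding for
        min_beta (1/n) * sum_i log(1 + exp(-y_i * x_i' beta)) + lam * ||beta||_0,
    with labels y_i in {-1, +1}. Illustrative sketch only."""
    n, p = X.shape
    beta = np.zeros(p)
    # Per-coordinate Lipschitz bounds for the logistic loss: ||x_j||^2 / (4n)
    L = (X ** 2).sum(axis=0) / (4.0 * n)
    for _ in range(n_iters):
        for j in range(p):
            margins = X @ beta
            # Partial derivative of the logistic loss w.r.t. beta_j
            grad_j = -(X[:, j] * y / (1.0 + np.exp(y * margins))).sum() / n
            t = beta[j] - grad_j / L[j]
            # Hard-thresholding step: minimizing (L_j/2)(b - t)^2 + lam*1[b != 0]
            # keeps b = t only when (L_j/2) t^2 >= lam, i.e. t^2 >= 2*lam/L_j.
            beta[j] = t if t * t >= 2.0 * lam / L[j] else 0.0
    return beta
```

Larger values of `lam` zero out more coordinates, which is the mechanism behind the variable-selection behavior highlighted in the abstract.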
Pages: 47