Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives

Cited by: 0
Authors
Dedieu, Antoine [1 ]
Hazimeh, Hussein [1 ]
Mazumder, Rahul [2 ]
Affiliations
[1] MIT, Operat Res Ctr, 77 Massachusetts Ave, Cambridge, MA 02139 USA
[2] MIT, Operat Res Ctr, Ctr Stat, Sloan Sch Management, Cambridge, MA 02142 USA
Funding
U.S. National Science Foundation;
Keywords
sparsity; sparse classification; l0 regularization; mixed integer programming; GENERALIZED LINEAR-MODELS; LOGISTIC-REGRESSION; SUBSET-SELECTION; VARIABLE SELECTION; CONVERGENCE; ALGORITHMS; LASSO;
DOI
Not available
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
We consider a discrete optimization formulation for learning sparse classifiers, where the outcome depends upon a linear combination of a small subset of features. Recent work has shown that mixed integer programming (MIP) can be used to solve (to optimality) l0-regularized regression problems at scales much larger than what was conventionally considered possible. Despite their usefulness, MIP-based global optimization approaches are significantly slower than the relatively mature algorithms for l1-regularization and heuristics for nonconvex regularized problems. We aim to bridge this gap in computation times by developing new MIP-based algorithms for l0-regularized classification. We propose two classes of scalable algorithms: an exact algorithm that can handle p ≈ 50,000 features in a few minutes, and approximate algorithms that can address instances with p ≈ 10^6 in times comparable to the fast l1-based algorithms. Our exact algorithm is based on the novel idea of integrality generation, which solves the original problem (with p binary variables) via a sequence of mixed integer programs that involve a small number of binary variables. Our approximate algorithms are based on coordinate descent and local combinatorial search. In addition, we present new estimation error bounds for a class of l0-regularized estimators. Experiments on real and synthetic data demonstrate that our approach leads to models with considerably improved statistical performance (especially variable selection) compared to competing methods.
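For concreteness, the sparse classification problem described in the abstract is commonly written as an l0-regularized empirical risk minimization and, equivalently, as a mixed integer program with one binary indicator per feature. The sketch below assumes a logistic loss and a big-M bound M on the coefficients; these specific choices are illustrative and are not stated in the record above.

% Sketch (assumed formulation): l0-regularized classification with logistic loss,
% where lambda >= 0 controls sparsity.
\[
  \min_{\beta \in \mathbb{R}^p} \;\sum_{i=1}^{n} \log\bigl(1 + \exp(-y_i x_i^{\top}\beta)\bigr) + \lambda \lVert \beta \rVert_0
\]
% Equivalent big-M MIP reformulation with binary indicators z_j; the bound M on
% the coefficients is an illustrative assumption.
\[
  \min_{\beta,\, z} \;\sum_{i=1}^{n} \log\bigl(1 + \exp(-y_i x_i^{\top}\beta)\bigr) + \lambda \sum_{j=1}^{p} z_j
  \quad \text{s.t.} \quad \lvert \beta_j \rvert \le M z_j,\; z_j \in \{0,1\},\; j = 1,\dots,p.
\]

Here ||β||_0 counts the nonzero coefficients, and z_j = 0 forces β_j = 0. The integrality generation idea mentioned in the abstract solves a sequence of such MIPs in which only a small subset of the z_j are kept binary.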
Pages: 47