Robust linear classification from limited training data

Cited by: 2
Author
Chakrabarti, Deepayan [1]
Affiliation
[1] Univ Texas Austin, McCombs Sch Business, Austin, TX 78712 USA
Keywords
Classification; Principal components; Maximum entropy; Robust optimization; Regression; Optimization
DOI
10.1007/s10994-021-06093-5
CLC number
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
We consider the problem of linear classification under general loss functions in the limited-data setting, where overfitting is a common problem. The standard approaches to prevent overfitting are dimensionality reduction and regularization. But dimensionality reduction loses information, while regularization requires the user to choose a norm, a prior, or a distance metric. We propose an algorithm called RoLin that needs no user choice and applies to a large class of loss functions. RoLin combines "reliable" information from the top principal components with robust optimization to extract any useful information from "unreliable" subspaces. It also includes a new robust cross-validation that is better than existing cross-validation methods in the limited-data setting. Experiments on 25 real-world datasets and three standard loss functions show that RoLin broadly outperforms both dimensionality reduction and regularization. Dimensionality reduction has 14%-40% worse test loss on average compared to RoLin. Against L1 and L2 regularization, RoLin can be up to 3x better for logistic loss and 12x better for squared hinge loss. The differences are greatest for small sample sizes, where RoLin achieves the best loss on 2x to 3x more datasets than any competing method. For some datasets, RoLin with 15 training samples is better than the best norm-based regularization with 1500 samples.
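The two standard overfitting remedies that the abstract compares against can be sketched in a few lines of NumPy: truncating the feature space to the top principal components before fitting, versus fitting on all features with an L2 penalty on the weights. This is an illustrative baseline sketch under made-up toy data, not an implementation of RoLin; the dimensions, learning rate, and penalty strength are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy limited-data setting: n = 30 samples, d = 20 features, labels in {-1, +1}.
n, d = 30, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = np.where(X @ w_true + 0.5 * rng.normal(size=n) > 0, 1.0, -1.0)

def fit_logistic(X, y, lam=0.0, steps=2000, lr=0.1):
    """Gradient descent on the L2-regularized logistic loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        margins = np.clip(y * (X @ w), -30.0, 30.0)  # clip to avoid exp overflow
        grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y) + lam * w
        w -= lr * grad
    return w

# Remedy 1: dimensionality reduction -- keep only the top-k principal components.
k = 5
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
Xk = X @ Vt[:k].T             # project data onto the top-k subspace
w_pca = fit_logistic(Xk, y)   # unregularized fit in the reduced space

# Remedy 2: L2 regularization on all d features.
w_l2 = fit_logistic(X, y, lam=0.1)

acc_pca = np.mean(np.sign(Xk @ w_pca) == y)
acc_l2 = np.mean(np.sign(X @ w_l2) == y)
```

The trade-off the abstract describes is visible in the sketch: the PCA route discards all information outside the top-k subspace, while the L2 route keeps every direction but forces the user to pick a norm and a penalty strength `lam`.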
Pages: 1621-1649
Page count: 29