Adaptive threshold-based classification of sparse high-dimensional data

Cited by: 0
Authors
Pavlenko, Tatjana [1 ]
Stepanova, Natalia [2 ]
Thompson, Lee [2 ]
Affiliations
[1] Uppsala Univ, Dept Stat, Box 513, S-75120 Uppsala, Sweden
[2] Carleton Univ, Sch Math & Stat, 1125 Colonel By Dr, Ottawa, ON K1S 5B6, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
High-dimensional data; sparse vectors; adaptive threshold-based classification; asymptotically optimal classifier; HIGHER CRITICISM; SELECTION;
DOI
10.1214/22-EJS1998
CLC Classification
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Codes
020208 ; 070103 ; 0714 ;
Abstract
We revisit the problem of designing an efficient binary classifier in a challenging high-dimensional framework. The model under study assumes some local dependence structure among feature variables, represented by a block-diagonal covariance matrix with a growing number of blocks of an arbitrary but fixed size. The blocks correspond to non-overlapping independent groups of strongly correlated features. To assess the relevance of a particular block in predicting the response, we introduce a measure of "signal strength" pertaining to each feature block. This measure is then used to specify a sparse model of interest. We further propose a threshold-based feature selector that operates as a screen-and-clean scheme integrated into a linear classifier: the data are screened and cleaned by hard thresholding to filter out the blocks that contain no signal. Asymptotic properties of the proposed classifiers are studied in the regime where the sample size n depends on the number of feature blocks b and tends to infinity with b, but at a slower rate than b. The new classifiers, which are fully adaptive to the unknown parameters of the model, are shown to perform asymptotically optimally in a large part of the classification region. A numerical study confirms the good analytical properties of the new classifiers, which compare favorably to an existing threshold-based procedure used in a similar context.
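The screen-and-clean scheme summarized in the abstract can be illustrated by a minimal sketch. This is not the paper's exact procedure: the per-block "signal strength" statistic, the threshold `tau`, and the function name `threshold_block_classifier` are simplified stand-ins chosen for illustration, assuming the block structure is known and blocks are of equal size.

```python
import numpy as np

def threshold_block_classifier(X0, X1, block_size, tau):
    """Sketch of a screen-and-clean linear classifier for block-structured data.

    X0, X1     : training samples (rows) from class 0 and class 1
    block_size : common size of the non-overlapping feature blocks
    tau        : hard threshold on the per-block signal-strength proxy
    """
    d0, d1 = X0.mean(axis=0), X1.mean(axis=0)
    diff = d1 - d0                       # estimated mean difference
    p = diff.size
    n = X0.shape[0] + X1.shape[0]
    weights = np.zeros(p)
    for j in range(p // block_size):
        blk = slice(j * block_size, (j + 1) * block_size)
        # Proxy "signal strength" of the block: scaled norm of the
        # estimated mean difference restricted to the block.
        strength = np.sqrt(n) * np.linalg.norm(diff[blk])
        if strength > tau:               # screen: keep only strong blocks
            weights[blk] = diff[blk]     # clean: weak blocks stay at zero
    midpoint = (d0 + d1) / 2.0
    # Linear rule on the surviving blocks: returns 1 for class 1, 0 otherwise.
    return lambda x: int(weights @ (x - midpoint) > 0)
```

Blocks whose estimated signal falls below `tau` receive zero weight, so the resulting linear rule ignores pure-noise blocks entirely; in the paper the threshold is chosen adaptively rather than supplied by the user.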
Pages: 1952-1996 (45 pp.)