Semi-supervised feature extraction for EEG classification

Cited by: 23
Authors
Tu, Wenting [1]
Sun, Shiliang [1]
Affiliations
[1] East China Normal University, Department of Computer Science and Technology, Shanghai 200241, People's Republic of China
Keywords
Semi-supervised learning; Feature extraction; EEG classification; Extreme energy ratio; Regularization; Density ratio
DOI
10.1007/s10044-012-0298-2
CLC number
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Two semi-supervised feature extraction methods are proposed for electroencephalogram (EEG) classification. They aim to alleviate two important limitations of brain-computer interfaces (BCIs). The first is the requirement of small training sets, owing to the need for short calibration sessions. The second is the time-varying nature of EEG signals: signals recorded in the training and test sessions often exhibit different discriminative features. These limitations are common in practical BCI systems and often degrade the performance of traditional feature extraction algorithms. In this paper, we propose two strategies that obtain semi-supervised feature extractors by improving a previous feature extraction method, extreme energy ratio (EER). The two methods are termed semi-supervised temporally smooth EER and semi-supervised importance-weighted EER. The former constructs a regularization term that preserves the temporal manifold of test samples and adds it as a constraint on the learning of spatial filters. The latter defines two kinds of weights by exploiting the distribution information of test samples and assigns them to training data points and trials to improve the estimation of covariance matrices. Both methods regularize the spatial filters to make them more robust and adaptive to the test sessions. Experimental results on data sets from nine subjects, with comparisons to the original EER, demonstrate their better classification performance.
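EER learns spatial filters that extremize the ratio of signal energies between two classes of trials, which reduces to a generalized eigenvalue problem on the two class covariance matrices; the importance-weighted variant re-weights training trials when those covariances are estimated. The sketch below is a minimal illustration of these two ingredients under stated assumptions, not the authors' implementation: the function names, the trace normalization of trial covariances, and the use of an externally supplied density-ratio weight vector are assumptions made for the example.

```python
import numpy as np
from scipy.linalg import eigh

def normalized_covariance(trial):
    # trial: (channels x samples) EEG segment; trace-normalized spatial covariance
    c = trial @ trial.T
    return c / np.trace(c)

def eer_spatial_filters(trials_a, trials_b, n_filters=2):
    # Average the trial covariances within each class
    c_a = np.mean([normalized_covariance(x) for x in trials_a], axis=0)
    c_b = np.mean([normalized_covariance(x) for x in trials_b], axis=0)
    # Generalized eigenproblem c_a w = lambda c_b w; eigenvectors with the
    # smallest/largest eigenvalues extremize the between-class energy ratio.
    eigvals, eigvecs = eigh(c_a, c_b)
    order = np.argsort(eigvals)
    picks = np.r_[order[:n_filters], order[-n_filters:]]
    return eigvecs[:, picks]  # (channels x 2*n_filters) spatial filters

def weighted_covariance(trials, weights):
    # Importance-weighted covariance: each training trial is weighted
    # (e.g. by an estimated test/training density ratio) before averaging.
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * normalized_covariance(x) for wi, x in zip(w, trials))

def log_energy_features(trial, filters):
    # Log band energies of the spatially filtered signals, used as features
    z = filters.T @ trial
    return np.log(np.var(z, axis=1))
```

In the importance-weighted setting, the `weights` argument would come from a density-ratio estimate between test and training data, so that training trials resembling the test session contribute more to the covariance estimates from which the spatial filters are computed.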
Pages: 213-222
Number of pages: 10