A Clustering-Based Approach to Reduce Feature Redundancy

Cited by: 1
Authors
de Amorim, Renato Cordeiro [1 ]
Mirkin, Boris [2 ]
Affiliations
[1] Univ Hertfordshire, Sch Comp Sci, Coll Lane Campus, Hatfield AL10 9AB, Herts, England
[2] Birkbeck Univ London, Dept Comp Sci & Informat Syst, Malet St, London WC1E 7HX, England
Source
KNOWLEDGE, INFORMATION AND CREATIVITY SUPPORT SYSTEMS: RECENT TRENDS, ADVANCES AND SOLUTIONS, KICSS 2013 | 2016 / Vol. 364
Keywords
Unsupervised feature selection; Feature weighting; Redundant features; Clustering; Mental task separation; FEATURE-SELECTION; VARIABLES;
DOI
10.1007/978-3-319-19090-7_35
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Research effort has recently focused on designing feature weighting clustering algorithms. These algorithms automatically calculate the weight of each feature in a data set, representing its degree of relevance. However, since most of them evaluate one feature at a time, they may have difficulty clustering data sets containing features that carry similar information. If a group of features contains the same relevant information, these algorithms assign a high weight to each feature in the group instead of removing some as redundant. This paper introduces an unsupervised feature selection method that can be applied in the data pre-processing step to reduce the number of redundant features in a data set. The method clusters similar features together and then selects a subset of representative features from each cluster. The selection is based on the maximum information compression index between each feature and its respective cluster centroid. We empirically validate our method by comparing it with a popular unsupervised feature selection method on three EEG data sets. We find that our method selects features that produce better cluster recovery, without requiring an extra user-defined parameter.
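The selection criterion described in the abstract can be sketched as follows. The maximum information compression index (MICI) of two variables is the smaller eigenvalue of their 2x2 covariance matrix; it equals zero exactly when the two variables are perfectly linearly correlated. This is a minimal illustrative sketch, not the authors' implementation: the function names are hypothetical, and we assume that the representative of each feature cluster is the feature whose MICI with the cluster's mean feature (its centroid) is smallest, i.e. the feature most redundant with the centroid. The paper's exact selection rule may differ.

```python
import numpy as np

def mici(x, y):
    """Maximum information compression index: the smaller eigenvalue of
    the 2x2 covariance matrix of (x, y). Zero when x and y are perfectly
    linearly correlated, so small values indicate redundancy."""
    cov = np.cov(x, y)                 # 2x2 covariance matrix
    return np.linalg.eigvalsh(cov)[0]  # eigenvalues in ascending order

def select_representatives(X, labels):
    """For each cluster of features (columns of X, grouped by `labels`),
    keep the feature with the smallest MICI to the cluster centroid.
    Assumption: one representative per cluster, chosen by minimal MICI."""
    selected = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = X[:, idx].mean(axis=1)  # mean feature of the cluster
        best = idx[np.argmin([mici(X[:, j], centroid) for j in idx])]
        selected.append(best)
    return selected
```

For example, if two columns are near-duplicates of one base signal and a third is independent, the first cluster yields one of the two duplicates and the second yields the independent feature, reducing the three features to two.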
Pages: 465-475
Page count: 11