Feature Selection Using Neighborhood based Entropy

Cited by: 4
Authors
Farnaghi-Zadeh, Fatemeh [1]
Rahmani, Mohsen [1]
Amiri, Maryam [1]
Affiliations
[1] Arak Univ, Fac Engn, Dept Comp Engn, Arak 3815688349, Iran
Keywords
Feature Selection; Discrimination Index; Neighborhood Relations; Density; Entropy; Distinguishing Ability; EFFICIENT FEATURE-SELECTION; WORKLOAD PREDICTION; MUTUAL INFORMATION; ALGORITHM; RELEVANCE; MODEL; SET
DOI
10.3897/jucs.79905
CLC Number (Chinese Library Classification)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Feature selection plays an important role as a preprocessing step for pattern recognition and machine learning. Its goal is to determine an optimal subset of relevant features from a large number of candidate features. The neighborhood discrimination index (NDI) is one of the newest and most efficient measures of the distinguishing ability of a feature subset. NDI is computed based on a neighborhood radius (E). Because E has a significant impact on NDI, selecting an appropriate value of E for each data set can be challenging and very time-consuming. This paper proposes a new approach based on targEt PointS To computE neIghborhood relatioNs (EPSTEIN). First, all data points are sorted in descending order of their density. Then, the highest-density data points are selected as target points, as many as there are classes. To determine the neighborhood relations, circles centered on the target points are drawn, and the points inside or on a circle are considered neighbors. Next, the significance of each feature is computed and a greedy algorithm selects appropriate features. The performance of the proposed approach is compared with both the most common and the newest feature selection methods. The experimental results show that EPSTEIN selects more effective feature subsets and improves the prediction accuracy of classifiers compared with other state-of-the-art methods such as Correlation-based Feature Selection (CFS), Fast Correlation-Based Filter (FCBF), the Heuristic Algorithm Based on Neighborhood Discrimination Index (HANDI), Ranking-Based Feature Inclusion for Optimal Feature Subset (KNFI), Ranking-Based Feature Elimination (KNFE), and Principal Component Analysis and Information Gain (PCA-IG).
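The neighborhood-construction step described above can be made concrete with a short sketch. The Python fragment below is only an illustration under stated assumptions, not the paper's implementation: density is estimated here as the number of points within a fixed cutoff distance, the radius of each circle is taken as a user-supplied parameter, and the names build_neighborhoods, density_cutoff, and circle_radius are hypothetical; the paper may define the density estimate and the circle radius differently.

    import numpy as np

    def build_neighborhoods(X, y, density_cutoff, circle_radius):
        # Pairwise Euclidean distances between all data points.
        dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)

        # Assumed density estimator: number of other points within the cutoff distance.
        density = (dist <= density_cutoff).sum(axis=1) - 1

        # Sort by descending density and keep as many target points as there are classes.
        n_classes = len(np.unique(y))
        targets = np.argsort(-density)[:n_classes]

        # Points inside or on the circle around each target point are its neighbors.
        return {int(t): np.where(dist[t] <= circle_radius)[0] for t in targets}

    # Usage on synthetic data with 5 features and 3 classes.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 5))
    y = rng.integers(0, 3, size=60)
    print(build_neighborhoods(X, y, density_cutoff=2.0, circle_radius=2.5))

In the full method, these neighborhoods would then feed the feature-significance computation and the greedy selection loop described in the abstract.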
Pages: 1169-1192
Number of Pages: 24