An Empirical Evaluation of Feature Selection Stability and Classification Accuracy

Cited: 0
|
Authors
Buyukkececi, Mustafa [1 ]
Okur, Mehmet Cudi [2 ]
Affiliations
[1] Univerlist, Izmir, Turkiye
[2] Yasar Univ, Fac Engn, Dept Software Engn, Izmir, Turkiye
Source
GAZI UNIVERSITY JOURNAL OF SCIENCE | 2024, Vol. 37, Issue 02
Keywords
Feature selection; Selection stability; Classification accuracy; Filter methods; Wrapper methods; ALGORITHMS; BIAS;
DOI
10.35378/gujs.998964
Chinese Library Classification (CLC) Codes
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
The performance of inductive learners can be negatively affected by high-dimensional datasets. Feature selection methods address this issue: selecting relevant features and reducing data dimensionality is essential for building accurate machine learning models. Stability is an important criterion in feature selection. Stable feature selection algorithms maintain their feature preferences even when small variations exist in the training set. Studies have emphasized the importance of stable feature selection, particularly when the number of samples is small and the dimensionality is high. In this study, we evaluated the relationships among stability measures, as well as the relationship between feature selection stability and classification accuracy, using Pearson's correlation coefficient (also known as Pearson's product-moment correlation coefficient, or simply Pearson's r). We conducted an extensive series of experiments using five filter and two wrapper feature selection methods, three classifiers for subset and classification performance evaluation, and eight real-world datasets taken from two data repositories. We measured the stability of the feature selection methods using twelve stability metrics. Based on the results of the correlation analyses, we found no substantial evidence of a linear relationship between feature selection stability and classification accuracy. However, a strong positive correlation was observed among several stability metrics.
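The evaluation idea described in the abstract can be illustrated with a minimal sketch (not the authors' code or protocol). Here stability is approximated by the average pairwise Jaccard similarity of the feature subsets selected on resampled training sets, and SciPy's pearsonr correlates stability with mean test accuracy. The selector (SelectKBest), classifier (KNeighborsClassifier), synthetic data, and the helper jaccard_stability are illustrative assumptions; the paper itself uses five filter and two wrapper selectors, three classifiers, twelve stability metrics, and eight real-world datasets.

    # Sketch: Jaccard-based selection stability vs. classification accuracy,
    # correlated with Pearson's r. Illustrative only; not the paper's setup.
    from itertools import combinations

    import numpy as np
    from scipy.stats import pearsonr
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    def jaccard_stability(subsets):
        """Average pairwise Jaccard similarity of selected feature subsets."""
        pairs = list(combinations(subsets, 2))
        return np.mean([len(a & b) / len(a | b) for a, b in pairs])

    X, y = make_classification(n_samples=200, n_features=50,
                               n_informative=8, random_state=0)

    stabilities, accuracies = [], []
    for k in (5, 10, 15, 20):              # vary the selector configuration
        subsets, accs = [], []
        for seed in range(10):             # perturb the training data
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=0.3, random_state=seed)
            selector = SelectKBest(f_classif, k=k).fit(X_tr, y_tr)
            chosen = set(np.flatnonzero(selector.get_support()))
            subsets.append(chosen)
            cols = sorted(chosen)
            clf = KNeighborsClassifier().fit(X_tr[:, cols], y_tr)
            accs.append(clf.score(X_te[:, cols], y_te))
        stabilities.append(jaccard_stability(subsets))
        accuracies.append(np.mean(accs))

    r, p = pearsonr(stabilities, accuracies)
    print(f"Pearson's r between stability and accuracy: {r:.3f} (p = {p:.3f})")

Each selector configuration yields one (stability, accuracy) point, and the correlation is computed across those points; the paper performs the analogous analysis across selection methods, metrics, and datasets.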
Pages: 606 - 620
Number of pages: 15
Related Papers
50 records in total
  • [31] An Evolutionary Approach to Feature Selection and Classification
    Lung, Rodica Ioana
    Suciu, Mihai-Alexandru
    MACHINE LEARNING, OPTIMIZATION, AND DATA SCIENCE, LOD 2023, PT I, 2024, 14505 : 333 - 347
  • [32] Feature selection for the classification of traced neurons
    Lopez-Cabrera, Jose D.
    Lorenzo-Ginori, Juan V.
    JOURNAL OF NEUROSCIENCE METHODS, 2018, 303 : 41 - 54
  • [33] Hybrid feature selection for text classification
    Gunal, Serkan
    TURKISH JOURNAL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCES, 2012, 20 : 1296 - 1311
  • [34] Using Individual Feature Evaluation to Start Feature Subset Selection Methods for Classification
    Arauzo-Azofra, Antonio
    Molina-Baena, Jose
    Jimenez-Vilchez, Alfonso
    Luque-Rodriguez, Maria
    ICAART: PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE, VOL 2, 2017, : 607 - 614
  • [35] The Effects of Privacy Preserving Data Publishing based on Overlapped Slicing on Feature Selection Stability and Accuracy
    Chelvan, M. P.
    Perumal, K.
    Intl. J. Adv. Comput. Sci. Appl., 2020, 12: 161 - 167
  • [36] Stability of feature selection algorithm: A review
    Khaire, Utkarsh Mahadeo
    Dhanalakshmi, R.
    JOURNAL OF KING SAUD UNIVERSITY-COMPUTER AND INFORMATION SCIENCES, 2022, 34 (04) : 1060 - 1073
  • [37] Bias and stability of single variable classifiers for feature ranking and selection
    Fakhraei, Shobeir
    Soltanian-Zadeh, Hamid
    Fotouhi, Farshad
    EXPERT SYSTEMS WITH APPLICATIONS, 2014, 41 (15) : 6945 - 6958
  • [38] Theoretical and empirical study on the potential inadequacy of mutual information for feature selection in classification
    Frenay, Benoit
    Doquire, Gauthier
    Verleysen, Michel
    NEUROCOMPUTING, 2013, 112 : 64 - 78
  • [39] An empirical study on the joint impact of feature selection and data resampling on imbalance classification
    Zhang, Chongsheng
    Soda, Paolo
    Bi, Jingjun
    Fan, Gaojuan
    Almpanidis, George
    Garcia, Salvador
    Ding, Weiping
    APPLIED INTELLIGENCE, 2023, 53 (05) : 5449 - 5461
  • [40] Feature selection inspired by human intelligence for improving classification accuracy of cancer types
    Shukla, Alok Kumar
    COMPUTATIONAL INTELLIGENCE, 2021, 37 (04) : 1571 - 1598