An Empirical Evaluation of Feature Selection Stability and Classification Accuracy

Cited by: 0
Authors
Buyukkececi, Mustafa [1 ]
Okur, Mehmet Cudi [2 ]
Affiliations
[1] Univerlist, Izmir, Turkiye
[2] Yasar Univ, Fac Engn, Dept Software Engn, Izmir, Turkiye
Source
GAZI UNIVERSITY JOURNAL OF SCIENCE | 2024, Vol. 37, No. 02
Keywords
Feature selection; Selection stability; Classification accuracy; Filter methods; Wrapper methods; ALGORITHMS; BIAS;
DOI
10.35378/gujs.998964
CLC Classification Codes
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
The performance of inductive learners can be negatively affected by high-dimensional datasets. Feature selection methods address this issue: selecting relevant features and reducing data dimensionality is essential for building accurate machine learning models. Stability is an important criterion in feature selection; stable feature selection algorithms maintain their feature preferences even under small variations in the training set. Previous studies have emphasized the importance of stable feature selection, particularly when the number of samples is small and the dimensionality is high. In this study, we evaluated the relationships among stability measures, as well as the relationship between feature selection stability and classification accuracy, using Pearson's correlation coefficient (also known as Pearson's product-moment correlation coefficient, or simply Pearson's r). We conducted an extensive series of experiments using five filter and two wrapper feature selection methods, three classifiers for subset and classification performance evaluation, and eight real-world datasets drawn from two data repositories. We measured the stability of the feature selection methods using a total of twelve stability metrics. Based on the correlation analyses, we found no substantial evidence of a linear relationship between feature selection stability and classification accuracy. However, we observed strong positive correlations among several of the stability metrics.
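The two quantities the abstract correlates can be sketched in code: a stability score for a feature selector (how similar its selected subsets are across perturbed training sets) and Pearson's r between per-method stability and accuracy scores. This is an illustrative sketch only — the mean pairwise Jaccard similarity shown here is just one of many subset-based stability metrics (the paper evaluates twelve), and the function names are ours, not the paper's.

```python
import statistics


def pearson_r(x, y):
    """Pearson's product-moment correlation coefficient between two
    equal-length sequences (assumes neither sequence is constant)."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


def jaccard_stability(subsets):
    """One simple stability metric: the mean pairwise Jaccard similarity
    of the feature subsets a method selects across resampled training
    sets. 1.0 means identical subsets every time; values near 0 mean the
    method's preferences change with every perturbation."""
    sims = []
    for i in range(len(subsets)):
        for j in range(i + 1, len(subsets)):
            a, b = set(subsets[i]), set(subsets[j])
            sims.append(len(a & b) / len(a | b))
    return sum(sims) / len(sims)


# Hypothetical usage: per-method stability scores vs. mean accuracies.
stability = [jaccard_stability([[1, 2, 3], [1, 2, 4], [1, 2, 3]]),
             jaccard_stability([[1, 2], [3, 4], [5, 6]])]
accuracy = [0.91, 0.88]
r = pearson_r(stability, accuracy)  # near +1 or -1 here only because n=2
```

A study like the one described would compute such a stability score for each of the seven selection methods on each dataset, pair it with the classification accuracy of the resulting subsets, and test the correlation for significance.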
Pages: 606-620 (15 pages)