Gift: granularity over specific-class for feature selection

Cited by: 8
Authors
Ba, Jing [1 ]
Liu, Keyu [1 ,2 ]
Yang, Xibei [1 ,3 ]
Qian, Yuhua [4 ]
Affiliations
[1] Jiangsu Univ Sci & Technol, Sch Comp, Changhui Rd, Zhenjiang 212100, Jiangsu, Peoples R China
[2] Southwest Jiaotong Univ, Sch Comp & Artificial Intelligence, Xian Rd, Chengdu 611756, Sichuan, Peoples R China
[3] Zhejiang Ocean Univ, Key Lab Oceanog Big Data Min & Applicat Zhejiang P, Lincheng St, Zhoushan 316022, Zhejiang, Peoples R China
[4] Shanxi Univ, Sch Comp & Informat Technol, Wucheng Rd, Taiyuan 030006, Shanxi, Peoples R China
Keywords
Feature selection; Gift; Granularity; Specific-class; ROUGH SET; KNOWLEDGE GRANULATION; CLASSIFIERS;
DOI
10.1007/s10462-023-10499-z
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As a fundamental component of Granular Computing, information granulation sheds new light on the topic of feature selection. Although information granulation has been effectively applied to feature selection, existing feature selection methods lack a characterization of feature potential. Such an ability is one of the important factors in evaluating the importance of features, as it determines whether candidate features have sufficient ability to distinguish different target variables. In view of this, a novel concept of granularity over specific-class is proposed from the perspective of information granulation. Essentially, such a granularity is a fusion of intra-class and extra-class based granularities, which makes it possible to exploit the discrimination ability of features. Accordingly, an intuitive yet effective framework named Gift, i.e., granularity over specific-class for feature selection, is proposed. Comprehensive experiments on 29 public datasets clearly validate the effectiveness of Gift compared with other feature selection strategies, especially on noisy data.
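The abstract's core idea, fusing an intra-class granularity with an extra-class granularity to score how well each feature separates a specific class, can be sketched as follows. This is a hypothetical illustrative surrogate, not the paper's exact Gift measure: the function names (`gift_style_scores`, `select_top_k`) and the ratio-based fusion are assumptions chosen only to make the intra/extra-class contrast concrete.

```python
import numpy as np

def gift_style_scores(X, y):
    """Illustrative specific-class feature scoring (sketch, not the paper's measure).

    For each feature and each class, contrast an intra-class granularity
    (spread of the feature's values within the class) with an extra-class
    granularity (separation of the class from all remaining samples), then
    fuse the per-class scores by averaging over the specific classes.
    """
    n_samples, n_features = X.shape
    classes = np.unique(y)
    scores = np.zeros(n_features)
    for j in range(n_features):
        col = X[:, j]
        per_class = []
        for c in classes:
            intra = col[y == c].std() + 1e-12                     # intra-class granularity
            extra = abs(col[y == c].mean() - col[y != c].mean())  # extra-class separation
            per_class.append(extra / intra)                       # fuse: reward separation, penalize spread
        scores[j] = np.mean(per_class)
    return scores

def select_top_k(X, y, k):
    """Return indices of the k highest-scoring features."""
    return np.argsort(gift_style_scores(X, y))[::-1][:k]
```

Under this sketch, a feature whose values cluster tightly within each class but far apart across classes receives a high score, which mirrors the abstract's notion of a feature's "discrimination ability" over a specific class.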
Pages: 12201-12232
Page count: 32
Related Papers
42 in total
  • [1] An Intelligent Metaheuristic Binary Pigeon Optimization-Based Feature Selection and Big Data Classification in a MapReduce Environment
    Abukhodair, Felwa
    Alsaggaf, Wafaa
    Jamal, Amani Tariq
    Abdel-Khalek, Sayed
    Mansour, Romany F.
    [J]. MATHEMATICS, 2021, 9 (20)
  • [2] Wild patterns: Ten years after the rise of adversarial machine learning
    Biggio, Battista
    Roli, Fabio
    [J]. PATTERN RECOGNITION, 2018, 84 : 317 - 331
  • [3] A probabilistic learning algorithm for robust modeling using neural networks with random weights
    Cao, Feilong
    Ye, Hailiang
    Wang, Dianhui
    [J]. INFORMATION SCIENCES, 2015, 313 : 62 - 78
  • [4] Attribute group for attribute reduction
    Chen, Yan
    Liu, Keyu
    Song, Jingjing
    Fujita, Hamido
    Yang, Xibei
    Qian, Yuhua
    [J]. INFORMATION SCIENCES, 2020, 535 : 64 - 80
  • [5] Demsar J, 2006, J MACH LEARN RES, V7, P1
  • [6] A unified low-order information-theoretic feature selection framework for multi-label learning
    Gao, Wanfu
    Hao, Pingting
    Wu, Yang
    Zhang, Ping
    [J]. PATTERN RECOGNITION, 2023, 134
  • [7] Feature-specific mutual information variation for multi-label feature selection
    Hu, Liang
    Gao, Lingbo
    Li, Yonghao
    Zhang, Ping
    Gao, Wanfu
    [J]. INFORMATION SCIENCES, 2022, 593 : 449 - 471
  • [8] Neighborhood classifiers
    Hu, Qinghua
    Yu, Daren
    Xie, Zongxia
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2008, 34 (02) : 866 - 876
  • [9] Markov cross-validation for time series model evaluations
    Jiang, Gaoxia
    Wang, Wenjian
    [J]. INFORMATION SCIENCES, 2017, 375 : 219 - 233
  • [10] Overcoming the myopia of inductive learning algorithms with RELIEFF
    Kononenko, I
    Simec, E
    RobnikSikonja, M
    [J]. APPLIED INTELLIGENCE, 1997, 7 (01) : 39 - 55