Feature selection with scalable variational Gaussian process via sensitivity analysis based on L2 divergence

Cited: 2
Authors
Jeon, Younghwan [1]
Hwang, Ganguk [1]
Affiliations
[1] Korea Advanced Institute of Science and Technology (KAIST), Department of Mathematical Sciences, Daejeon 305701, South Korea
Funding
National Research Foundation of Singapore
Keywords
Gaussian processes; Scalable variational Gaussian process; Feature selection; L2 divergence; Variable selection; Models
DOI
10.1016/j.neucom.2022.11.013
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Feature selection is one of the most important issues in supervised learning, and many different feature selection approaches exist in the literature. Among them, one recent approach uses the Gaussian process (GP) because it captures well the hidden relevance between the input features and the output. However, the existing feature selection approaches with GP suffer from a scalability problem due to the high computational cost of GP inference. Moreover, they use the Kullback-Leibler (KL) divergence in the sensitivity analysis for feature selection, but we show in this paper that the KL divergence underestimates the relevance of important features in some cases of classification. To remedy these drawbacks of the existing GP-based approaches, we propose a new feature selection method with a scalable variational Gaussian process (SVGP) and the L2 divergence. With the help of the SVGP, the proposed method exploits given large data sets well for feature selection through so-called inducing points while avoiding the scalability problem. Moreover, we provide theoretical analysis to motivate the choice of the L2 divergence for feature selection in both classification and regression. To validate the performance of the proposed method, we compare it with other existing methods through experiments with synthetic and real data sets. © 2022 Elsevier B.V. All rights reserved.
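The sensitivity idea summarized in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration under stated assumptions, not the authors' method: scikit-learn's exact GaussianProcessRegressor stands in for the SVGP (so no inducing points appear), and each feature is scored by permuting its column and averaging the closed-form L2 divergence between the GP's predictive Gaussians before and after the perturbation. The permutation scheme, the toy data, and all names here are illustrative assumptions.

```python
# Sketch: L2-divergence-based sensitivity scores for feature selection.
# Assumption: an exact GP (scikit-learn) replaces the paper's SVGP, and
# feature relevance is probed by permuting one column at a time.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def l2_div_gauss(m1, s1, m2, s2):
    """Closed-form L2 divergence between N(m1, s1^2) and N(m2, s2^2):
    D = int (p - q)^2 dx
      = 1/(2*s1*sqrt(pi)) + 1/(2*s2*sqrt(pi)) - 2*N(m1 - m2 | 0, s1^2 + s2^2).
    """
    var = s1 ** 2 + s2 ** 2
    cross = np.exp(-0.5 * (m1 - m2) ** 2 / var) / np.sqrt(2 * np.pi * var)
    return (1.0 / (2 * s1 * np.sqrt(np.pi))
            + 1.0 / (2 * s2 * np.sqrt(np.pi)) - 2.0 * cross)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Toy target: only features 0 and 1 carry signal.
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)
Xt = rng.normal(size=(100, 5))  # held-out points for scoring

gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(5)), alpha=1e-2)
gp.fit(X, y)
mu, sd = gp.predict(Xt, return_std=True)
sd = np.maximum(sd, 1e-6)  # guard against numerically zero std

scores = []
for j in range(Xt.shape[1]):
    Xp = Xt.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
    mu_p, sd_p = gp.predict(Xp, return_std=True)
    sd_p = np.maximum(sd_p, 1e-6)
    scores.append(np.mean(l2_div_gauss(mu, sd, mu_p, sd_p)))

# Features ranked by L2-divergence sensitivity; 0 and 1 should come first.
print(np.argsort(scores)[::-1])
```

Because the L2 divergence between two univariate Gaussians has the closed form used above, the sensitivity score is cheap to evaluate whenever the predictive distribution is Gaussian, which is the setting the abstract's regression analysis relies on.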
Pages: 577 - 592
Page count: 16