Gaussian Kernel Width Optimization for Sparse Bayesian Learning

Cited by: 28
Authors
Mohsenzadeh, Yalda [1 ]
Sheikhzadeh, Hamid [1 ]
Affiliation
[1] Amirkabir Univ Technol, Dept Elect Engn, Tehran 1415854546, Iran
Keywords
Adaptive kernel learning (AKL); expectation maximization (EM); kernel width optimization; regression; relevance vector machine (RVM); sparse Bayesian learning; supervised kernel methods; MACHINE; POSE;
DOI
10.1109/TNNLS.2014.2321134
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.
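The abstract describes an EM-based extension of the RVM in which the Gaussian kernel width is optimized during training rather than fixed by cross-validation. As a point of reference only, the baseline being extended can be sketched as a standard Tipping-style RVM with a fixed kernel width. This is a minimal illustrative sketch, not the paper's algorithm: the function names, the fixed `width=0.5`, the MacKay-style hyperparameter updates, and the sinc toy data are all assumptions chosen for the example; the paper's contribution is precisely the additional EM update of `width` that this sketch omits.

```python
import numpy as np

def gaussian_kernel(x, z, width):
    """Gaussian (RBF) kernel matrix for 1-D inputs."""
    d2 = (x[:, None] - z[None, :]) ** 2
    return np.exp(-d2 / (2.0 * width ** 2))

def rvm_regression(x, t, width=0.5, n_iter=100):
    """Standard RVM regression with a FIXED kernel width.
    The paper extends this loop so that `width` itself is updated by EM
    alongside the weight precisions `alpha` and the noise precision `beta`."""
    N = len(x)
    Phi = np.column_stack([np.ones(N), gaussian_kernel(x, x, width)])
    alpha = np.ones(Phi.shape[1])   # per-weight precisions (ARD prior)
    beta = 1.0 / np.var(t)          # noise precision
    for _ in range(n_iter):
        # E-step: Gaussian posterior over weights given current hyperparameters
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        mu = beta * Sigma @ Phi.T @ t
        # M-step: re-estimate hyperparameters (MacKay-style fixed-point updates)
        gamma = 1.0 - alpha * np.diag(Sigma)   # effective dof per weight
        alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-12, 1e12)
        beta = max(N - gamma.sum(), 1e-6) / (np.sum((t - Phi @ mu) ** 2) + 1e-12)
    return mu, alpha, Phi @ mu

# Toy sinc data, a benchmark commonly used in the RVM literature.
rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 60)
t = np.sinc(x) + 0.05 * rng.standard_normal(x.size)
mu, alpha, pred = rvm_regression(x, t)
relevant = int(np.sum(alpha < 1e6))  # basis functions not pruned away
```

Weights whose precision `alpha` diverges are pruned, leaving a sparse set of "relevance vectors"; the quality of that sparse solution depends strongly on `width`, which is the dependency the paper's method removes.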
Pages: 709 - 719 (11 pages)
Related papers (showing items 21-30 of 50)
  • [21] SPARSE BAYESIAN LEARNING WITH MULTIPLE DICTIONARIES
    Nannuru, Santosh
    Gemba, Kay L.
    Gerstoft, Peter
    2017 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP 2017), 2017, : 1190 - 1194
  • [22] Alternative to Extended Block Sparse Bayesian Learning and Its Relation to Pattern-Coupled Sparse Bayesian Learning
    Wang, Lu
    Zhao, Lifan
    Rahardja, Susanto
    Bi, Guoan
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2018, 66 (10) : 2759 - 2771
  • [23] Efficient Sparse Generalized Multiple Kernel Learning
    Yang, Haiqin
    Xu, Zenglin
    Ye, Jieping
    King, Irwin
    Lyu, Michael R.
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2011, 22 (03) : 433 - 446
  • [24] An expanded sparse Bayesian learning method for polynomial chaos expansion
    Zhou, Yicheng
    Lu, Zhenzhou
    Cheng, Kai
    Shi, Yan
    MECHANICAL SYSTEMS AND SIGNAL PROCESSING, 2019, 128 : 153 - 171
  • [25] Computationally Efficient Sparse Bayesian Learning via Belief Propagation
    Tan, Xing
    Li, Jian
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2010, 58 (04) : 2010 - 2021
  • [26] Achieving the sparse acoustical holography via the sparse bayesian learning
    Yu, Liang
    Li, Zhixin
    Chu, Ning
    Mohammad-Djafari, Ali
    Guo, Qixin
    Wang, Rui
    APPLIED ACOUSTICS, 2022, 191
  • [27] Sparse Bayesian Broad Learning System for Probabilistic Estimation of Prediction
    Xu, Lili
    Chen, C. L. Philip
    Han, Ruizhi
    IEEE ACCESS, 2020, 8 : 56267 - 56280
  • [28] Comparison of trend models for geotechnical spatial variability: Sparse Bayesian Learning vs. Gaussian Process Regression
    Ching, Jianye
    Yoshida, Ikumasa
    Phoon, Kok-Kwang
    GONDWANA RESEARCH, 2023, 123 : 174 - 183
  • [29] A Bayesian Lasso based sparse learning model
    Helgoy, Ingvild M.
    Li, Yushu
    COMMUNICATIONS IN STATISTICS-SIMULATION AND COMPUTATION, 2023,
  • [30] Bayesian learning of sparse gene regulatory networks
    Chan, Zeke S. H.
    Collins, Lesley
    Kasabov, N.
    BIOSYSTEMS, 2007, 87 (2-3) : 299 - 306