Scalable Kernel-based Learning via Low-rank Approximation of Lifted Data

Times Cited: 0
Authors
Sheikholeslami, Fatemeh [1]
Giannakis, Georgios B.
Affiliations
[1] Univ Minnesota, Dept ECE, Minneapolis, MN 55455 USA
Source
2017 55TH ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON) | 2017
Keywords
DESCENT; CONVERGENCE; MATRIX;
DOI
Not available
Chinese Library Classification
TP [Automation, Computer Technology]
Discipline Code
0812
Abstract
Despite their well-documented capability in modeling nonlinear functions, kernel methods fall short in large-scale learning tasks due to their excessive memory and computational requirements. The present work introduces a novel kernel approximation approach from a dimensionality reduction point of view on virtual lifted data. The proposed framework accommodates feature extraction under limited storage and computational budgets, and subsequently approximates the kernel by a linear inner product over the extracted features. Probabilistic guarantees on the generalization of the proposed task are provided, and efficient solvers with provable convergence guarantees are developed. By introducing a sampling step that precedes the dimensionality reduction task, the framework is further broadened to accommodate learning over large datasets. The connection between the novel method and the Nyström kernel approximation algorithm, along with its modifications, is also presented. Empirical tests validate the effectiveness of the proposed approach.
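The abstract's core recipe — sample a subset of points, extract low-dimensional features, then approximate the kernel by a linear inner product over those features — is closely related to the Nyström method the paper connects to. A minimal NumPy sketch of that Nyström-style approximation is given below; the function names and parameters are illustrative, not taken from the paper, and this is not the authors' exact algorithm.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared distances, then the Gaussian (RBF) kernel.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * d2)

def nystrom_features(X, m, gamma=1.0, seed=0):
    """Map each row of X to an m-dimensional feature vector Z such that
    Z @ Z.T approximates the full kernel matrix (Nystrom method)."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)   # sampling step
    landmarks = X[idx]
    W = rbf_kernel(landmarks, landmarks, gamma)       # m x m landmark block
    C = rbf_kernel(X, landmarks, gamma)               # n x m cross block
    # Eigendecompose W; features Z = C U diag(1/sqrt(lambda)),
    # so that Z Z^T = C W^{-1} C^T, the Nystrom approximation of K.
    evals, evecs = np.linalg.eigh(W)
    evals = np.maximum(evals, 1e-12)                  # guard tiny eigenvalues
    return C @ evecs / np.sqrt(evals)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
Z = nystrom_features(X, m=50, gamma=0.5)
K = rbf_kernel(X, X, gamma=0.5)
err = np.linalg.norm(K - Z @ Z.T) / np.linalg.norm(K)
```

After this mapping, any linear learner applied to the rows of `Z` implicitly operates with the approximated kernel, which is what makes the memory and computation scale with the number of landmarks `m` rather than the dataset size.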
Pages: 596-603
Page count: 8
Related Papers
50 records
  • [1] Kernel-based Low-rank Feature Extraction on a Budget for Big Data Streams
    Sheikholeslami, Fatemeh
    Berberidis, Dimitris
    Giannakis, Georgios B.
    2015 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2015, : 928 - 932
  • [2] Large-Scale Kernel-Based Feature Extraction via Low-Rank Subspace Tracking on a Budget
    Sheikholeslami, Fatemeh
    Berberidis, Dimitris
    Giannakis, Georgios B.
    IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2018, 66 (08) : 1967 - 1981
  • [3] Scalable Nonparametric Low-Rank Kernel Learning Using Block Coordinate Descent
    Hu, En-Liang
    Kwok, James T.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2015, 26 (09) : 1927 - 1938
  • [4] Is Input Sparsity Time Possible for Kernel Low-Rank Approximation?
    Musco, Cameron
    Woodruff, David P.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [5] Sparse semi-supervised learning on low-rank kernel
    Zhang, Kai
    Wang, Qiaojun
    Lan, Liang
    Sun, Yu
    Marsic, Ivan
    NEUROCOMPUTING, 2014, 129 : 265 - 272
  • [6] Model Reduction of Markov Chains via Low-Rank Approximation
    Deng, Kun
    Huang, Dayu
    2012 AMERICAN CONTROL CONFERENCE (ACC), 2012, : 2651 - 2656
  • [7] Low-rank decomposition meets kernel learning: A generalized Nystrom method
    Lan, Liang
    Zhang, Kai
    Ge, Hancheng
    Cheng, Wei
    Liu, Jun
    Rauber, Andreas
    Li, Xiao-Li
    Wang, Jun
    Zha, Hongyuan
    ARTIFICIAL INTELLIGENCE, 2017, 250 : 1 - 15
  • [8] lp-norm multiple kernel learning with low-rank kernels
    Rakotomamonjy, Alain
    Chanda, Sukalpa
    NEUROCOMPUTING, 2014, 143 : 68 - 79
  • [9] Learning Markov Models Via Low-Rank Optimization
    Zhu, Ziwei
    Li, Xudong
    Wang, Mengdi
    Zhang, Anru
    OPERATIONS RESEARCH, 2022, 70 (04) : 2384 - 2398
  • [10] Low-rank approximation-based bidirectional linear discriminant analysis for image data
    Chen, Xiuhong
    Chen, Tong
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (07) : 19369 - 19389