Unregularized Online Algorithms with Varying Gaussians

Cited by: 2
Authors
Wang, Baobin [1 ]
Hu, Ting [2 ]
Affiliations
[1] South Cent Univ Nationalities, Sch Math & Stat, Wuhan 430074, Peoples R China
[2] Wuhan Univ, Sch Math & Stat, Wuhan 430072, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Online learning; Varying Gaussian kernels; Reproducing kernel Hilbert spaces; Geometric noise condition; Learning rate;
DOI
10.1007/s00365-021-09536-3
Chinese Library Classification
O1 [Mathematics];
Discipline codes
0701; 070101;
Abstract
Gaussians are a family of Mercer kernels widely used in machine learning and statistics. The variance of a Gaussian kernel reflects the specific structure of the reproducing kernel Hilbert space (RKHS) it induces, as well as other important features of learning problems such as the frequency of function components. As the variance of the Gaussian decreases, the learning performance and approximation ability improve. This paper introduces an unregularized online algorithm with decreasing Gaussians, in which no regularization term is imposed and the samples are presented in sequence. With appropriate step sizes, concrete learning rates are derived under smoothness assumptions on the target function, which are used to bound the approximation error. Additionally, a new type of geometric noise condition is proposed to estimate the approximation error in place of any smoothness assumption. It is more general than the condition in Steinwart et al. (Ann Stat 35(2):575-607, 2007), which is suitable only for the hinge loss. An essential estimate is to bound the difference between the approximation functions generated by varying Gaussian RKHSs. The Fourier transform plays a crucial role in our analysis.
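The setting described in the abstract, online gradient descent in a Gaussian RKHS with no regularization term and a kernel variance that shrinks over time, can be sketched as follows. This is a minimal illustration, not the paper's exact construction: the squared loss, the particular step-size and variance schedules, and all function names here are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(x, y, sigma):
    # Gaussian kernel with width parameter sigma
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def unregularized_online_gaussian(samples, eta, sigma):
    """Unregularized online least-squares learning with varying Gaussians.

    samples  -- sequence of (x_t, y_t) pairs, presented one at a time
    eta(t)   -- step size at step t (illustrative schedule)
    sigma(t) -- Gaussian width at step t, typically decreasing in t

    The iterate f_t is stored as a kernel expansion over the inputs seen
    so far; each centre keeps the kernel width used at its own step,
    since the RKHS changes as sigma(t) varies.
    """
    centres = []  # list of (x_i, coefficient, sigma_i)
    for t, (x, y) in enumerate(samples, start=1):
        # evaluate the current iterate f_{t-1} at the new input x
        fx = sum(a * gaussian_kernel(x, xi, s) for xi, a, s in centres)
        # one gradient step on the pointwise squared loss,
        # with no regularization term in the update
        centres.append((x, -eta(t) * (fx - y), sigma(t)))

    def f(x):
        return sum(a * gaussian_kernel(x, xi, s) for xi, a, s in centres)
    return f
```

With a wide, slowly shrinking kernel and a constant target, the iterate settles near the target value after a modest number of steps; the schedules below are illustrative choices, not the ones analyzed in the paper.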
Pages: 403-440
Page count: 38
Related papers
50 records in total
  • [11] Differentially Private Algorithms for Learning Mixtures of Separated Gaussians
    Kamath, Gautam
    Sheffet, Or
    Singhal, Vikrant
    Ullman, Jonathan
    2020 INFORMATION THEORY AND APPLICATIONS WORKSHOP (ITA), 2020,
  • [12] Constrained monotone EM algorithms for finite mixture of multivariate Gaussians
    Ingrassia, Salvatore
    Rocci, Roberto
    COMPUTATIONAL STATISTICS & DATA ANALYSIS, 2007, 51 (11) : 5339 - 5351
  • [13] Online Neural Path Guiding with Normalized Anisotropic Spherical Gaussians
    Huang, Jiawei
    Iizuka, Akito
    Tanaka, Hajime
    Komura, Taku
    Kitamura, Yoshifumi
    ACM TRANSACTIONS ON GRAPHICS, 2024, 43 (03):
  • [14] Online algorithms for ambulance routing in disaster response with time-varying victim conditions
    Shiri, Davood
    Akbari, Vahid
    Salman, F. Sibel
    OR SPECTRUM, 2024, 46 (03) : 785 - 819
  • [15] ONLINE TIME-VARYING TOPOLOGY IDENTIFICATION VIA PREDICTION-CORRECTION ALGORITHMS
    Natali, Alberto
    Coutino, Mario
    Isufi, Elvin
    Leus, Geert
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 5400 - 5404
  • [16] Distributed Adaptive Subgradient Algorithms for Online Learning Over Time-Varying Networks
    Zhang, Mingchuan
    Hao, Bowei
    Ge, Quanbo
    Zhu, Junlong
    Zheng, Ruijuan
    Wu, Qingtao
    IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2022, 52 (07): : 4518 - 4529
  • [17] Differentially Private Distributed Online Algorithms Over Time-Varying Directed Networks
    Zhu, Junlong
    Xu, Changqiao
    Guan, Jianfeng
    Wu, Dapeng Oliver
    IEEE TRANSACTIONS ON SIGNAL AND INFORMATION PROCESSING OVER NETWORKS, 2018, 4 (01): : 4 - 17
  • [18] Private and polynomial time algorithms for learning Gaussians and beyond: Extended Abstract
    Ashtiani, Hassan
    Liaw, Christopher
    CONFERENCE ON LEARNING THEORY, VOL 178, 2022, 178
  • [19] Online algorithms
    Albers, S
    Leonardi, S
    ACM COMPUTING SURVEYS, 1999, 31 : B1 - B7
  • [20] Distributed Online Learning Algorithms for Aggregative Games Over Time-Varying Unbalanced Digraphs
    Zuo, Xiaolong
    Deng, Zhenhua
    2023 62ND IEEE CONFERENCE ON DECISION AND CONTROL, CDC, 2023, : 2278 - 2283