Convergence of Unregularized Online Learning Algorithms

Cited by: 1
Authors
Lei, Yunwen [1 ,2 ]
Shi, Lei [3 ]
Guo, Zheng-Chu [4 ]
Affiliations
[1] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen Key Lab Computat Intelligence, Shenzhen 518055, Peoples R China
[2] City Univ Hong Kong, Dept Math, Kowloon, Hong Kong, Peoples R China
[3] Fudan Univ, Shanghai Key Lab Contemporary Appl Math, Sch Math Sci, Shanghai 200433, Peoples R China
[4] Zhejiang Univ, Sch Math Sci, Hangzhou 310027, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Learning theory; Online learning; Convergence analysis; Reproducing kernel Hilbert space; STOCHASTIC-APPROXIMATION; REGULARIZATION; CLASSIFICATION;
DOI
Not available
Chinese Library Classification
TP [automation technology; computer technology]
Discipline Code
0812
Abstract
In this paper we study the convergence of online gradient descent algorithms in reproducing kernel Hilbert spaces (RKHSs) without regularization. We establish a sufficient condition and a necessary condition for the convergence of excess generalization errors in expectation. A sufficient condition for almost sure convergence is also given. With high probability, we provide explicit convergence rates of the excess generalization errors for both averaged iterates and the last iterate, which in turn also imply convergence rates with probability one. To the best of our knowledge, this is the first high-probability convergence rate for the last iterate of online gradient descent algorithms in the general convex setting. Without any boundedness assumptions on iterates, our results are derived by a novel use of two measures of the algorithm's one-step progress, measured respectively by generalization errors and by distances in RKHSs, where the variances of the involved martingales are cancelled out by the descent property of the algorithm.
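The abstract does not include code, but the algorithm it studies can be illustrated with a minimal sketch: unregularized online gradient descent for least-squares regression in an RKHS, where each step adds one kernel term centered at the new sample. The Gaussian kernel, the toy target sin(x), and the step-size schedule eta_t = t^{-1/2} are illustrative assumptions, not choices from the paper:

```python
import numpy as np

def gaussian_kernel(x, xp, sigma=1.0):
    """Gaussian (RBF) kernel on the real line."""
    return np.exp(-(x - xp) ** 2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
T = 200
xs = rng.uniform(-3.0, 3.0, T)                   # streamed inputs
ys = np.sin(xs) + 0.1 * rng.standard_normal(T)   # noisy targets (toy model)

# The iterate f_t lives in the RKHS and is stored as a kernel expansion:
# f_t(.) = sum_j coefs[j] * K(centers[j], .)
centers, coefs = [], []

def predict(x):
    return sum(c * gaussian_kernel(xc, x) for xc, c in zip(centers, coefs))

for t in range(T):
    eta = 1.0 / np.sqrt(t + 1.0)      # decaying step size, here eta_t = t^{-1/2}
    residual = predict(xs[t]) - ys[t] # f_t(x_t) - y_t for the least-squares loss
    # Unregularized update: f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .)
    # i.e. append one new kernel term; no shrinkage of earlier coefficients.
    centers.append(xs[t])
    coefs.append(-eta * residual)
```

Note the contrast with regularized online learning, where each step would also multiply the existing coefficients by a factor like (1 - eta_t * lambda); here the iterates are controlled only through the step sizes, which is exactly the setting whose convergence the paper analyzes.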
Pages: 33
Related Papers
50 records in total
  • [1] Unregularized online learning algorithms with general loss functions
    Ying, Yiming
    Zhou, Ding-Xuan
    APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS, 2017, 42 (02) : 224 - 244
  • [2] Unregularized Online Algorithms with Varying Gaussians
    Wang, Baobin
    Hu, Ting
    CONSTRUCTIVE APPROXIMATION, 2021, 53 (03) : 403 - 440
  • [3] Fast Convergence of Online Pairwise Learning Algorithms
    Boissier, Martin
    Lyu, Siwei
    Ying, Yiming
    Zhou, Ding-Xuan
    ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 51, 2016, 51 : 204 - 212
  • [4] Fast and strong convergence of online learning algorithms
    Guo, Zheng-Chu
    Shi, Lei
    ADVANCES IN COMPUTATIONAL MATHEMATICS, 2019, 45 (5-6) : 2745 - 2770
  • [5] Convergence analysis of online algorithms
    Ying, Yiming
    ADVANCES IN COMPUTATIONAL MATHEMATICS, 2007, 27 (03) : 273 - 291
  • [6] Convergence of learning algorithms
    DEVYATER, I. P.
    KAPLINSK, A. I.
    TSYPKIN, Y. Z.
    AUTOMATION AND REMOTE CONTROL, 1969, (10) : 1619 -
  • [7] Non-monotonic convergence of online learning algorithms for perceptrons with noisy teacher
    Ikeda, Kazushi
    Honda, Arata
    Hanzawa, Hiroaki
    Miyoshi, Seiji
    NEURAL NETWORKS, 2018, 102 : 21 - 26