Deterministic Gradient-Descent Learning of Linear Regressions: Adaptive Algorithms, Convergence Analysis and Noise Compensation

Cited by: 0
Authors
Liu, Kang-Zhi [1 ]
Gan, Chao [2 ]
Affiliations
[1] Chiba Univ, Dept Elect & Elect Engn, Chiba 2638522, Japan
[2] China Univ Geosci, Sch Automat, Wuhan 430074, Peoples R China
Keywords
Linear regression; gradient descent; adaptive learning rate; weight convergence; noise compensation; LMS;
DOI
10.1109/TPAMI.2024.3399312
CLC classification number
TP18 [Artificial Intelligence Theory];
Subject classification numbers
081104; 0812; 0835; 1405;
Abstract
Weight learning forms a basis of machine learning, and numerous algorithms have been developed to date. Most were either formulated in a stochastic framework or aimed at minimizing loss or regret functions; asymptotic convergence of the weights, which is vital for good output prediction, was seldom guaranteed in online applications. Since linear regression is the most fundamental model in machine learning, this paper focuses on that model. Aiming at online applications, a deterministic analysis method is developed based on LaSalle's invariance principle. Convergence conditions are derived for both first-order and second-order learning algorithms without resorting to any stochastic argument. Moreover, the deterministic approach makes the influence of noise easy to analyze: adaptive hyperparameters are derived in this framework, and their tuning rules are disclosed for the compensation of measurement noise. Comparison with four of the most popular algorithms validates that the approach has a higher learning capability and is promising for enhancing weight learning performance.
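For readers unfamiliar with the setting, the abstract's subject — online gradient-descent weight learning for a linear regression — can be illustrated with a generic normalized-LMS-style update. This is a minimal sketch only, not the paper's algorithm; the step size `mu`, regularizer `eps`, and noise level are hypothetical choices.

```python
import numpy as np

# Illustrative sketch: online weight learning for y = w^T x + noise
# via a generic normalized-LMS update. NOT the paper's algorithm;
# mu (step size) and eps (regularizer) are assumed hyperparameters.
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])   # unknown true weights
w = np.zeros(3)                       # learned weights
mu, eps = 0.5, 1e-8

for _ in range(2000):
    x = rng.standard_normal(3)                     # regressor sample
    y = w_true @ x + 0.01 * rng.standard_normal()  # noisy measurement
    e = y - w @ x                                  # prediction error
    w += mu * e * x / (eps + x @ x)                # normalized gradient step

print(np.round(w, 2))  # w approaches w_true as samples stream in
```

The normalization by `x @ x` is what makes the step size scale-invariant in the regressor; the paper's contribution, per the abstract, is a deterministic (LaSalle-based) convergence analysis and adaptive hyperparameter rules for such schemes under measurement noise.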
Pages: 7867-7877
Page count: 11