A Modified Stochastic Gradient Descent Optimization Algorithm With Random Learning Rate for Machine Learning and Deep Learning

Cited by: 0
Authors
Duk-Sun Shim
Joseph Shim
Affiliations
[1] School of Electrical and Electronics Engineering, Chung-Ang University
[2] Graduate School of Data Science, Seoul National University
Source
International Journal of Control, Automation and Systems | 2023, Vol. 21, No. 11
Keywords
Deep learning; machine learning; modified stochastic gradient descent; random learning rate; steepest descent algorithm
DOI: not available
Abstract
An optimization algorithm is essential for minimizing loss (or objective) functions in machine learning and deep learning. Optimization algorithms face several challenges, one of which is determining an appropriate learning rate. Generally, a low learning rate leads to slow convergence, whereas a large one causes the loss function to fluctuate around the minimum. As a hyper-parameter, the learning rate must be determined in advance, before parameter training, which is time-consuming. This paper proposes a modified stochastic gradient descent (mSGD) algorithm that uses a random learning rate. At every iteration, random numbers are generated as candidate learning rates, and the one that gives the minimum value of the loss function is chosen. The proposed mSGD algorithm can thus reduce the time required to determine the learning rate. In fact, the k-point mSGD algorithm can be regarded as a kind of steepest descent algorithm. In an experiment on the MNIST dataset of handwritten digits, the convergence performance of the mSGD algorithm is shown to be much better than that of the SGD algorithm and slightly better than that of the AdaGrad and Adam algorithms.
Pages: 3825-3831 (6 pages)
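To make the k-point idea in the abstract concrete, the following is a minimal Python sketch of one mSGD update under stated assumptions: `loss(w, batch)` and `grad(w, batch)` are placeholder callables, and the candidate count k = 4 and the log-uniform sampling range are illustrative choices, not the authors' reported settings.

```python
import numpy as np

def msgd_step(w, batch, loss, grad, k=4, lr_low=1e-4, lr_high=1e-1, rng=None):
    """One k-point mSGD step: try k random learning rates, keep the best.

    `loss(w, batch)` and `grad(w, batch)` are assumed callables; the
    log-uniform sampling range and k=4 are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    g = grad(w, batch)  # a single gradient evaluation per iteration
    # Draw k random candidate learning rates (log-uniform, for scale coverage).
    lrs = np.exp(rng.uniform(np.log(lr_low), np.log(lr_high), size=k))
    # Form the k tentative SGD updates and evaluate the mini-batch loss at each.
    candidates = [w - lr * g for lr in lrs]
    losses = [loss(c, batch) for c in candidates]
    best = int(np.argmin(losses))  # keep the learning rate with the lowest loss
    return candidates[best], lrs[best]
```

Because all k candidates reuse a single gradient, the overhead over plain SGD is k extra loss evaluations; picking the best candidate acts as a coarse randomized line search along the negative gradient direction, which is consistent with the abstract's remark that k-point mSGD can be viewed as a kind of steepest descent algorithm.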
Related Papers (50 total)
  • [1] Spawning Gradient Descent (SpGD): A Novel Optimization Framework for Machine Learning and Deep Learning
    Sheikhottayefe, Moeinoddin
    Esmaily, Zahra
    Dehghani, Fereshte
    SN Computer Science, 6 (3)
  • [2] Improvement of SPGD by Gradient Descent Optimization Algorithm in Deep Learning
    Zhao, Qingsong
    Hao, Shiqi
    Wang, Yong
    Wang, Lei
    Lin, Zhi
    2022 Asia Communications and Photonics Conference (ACP), 2022: 469-472
  • [3] Stochastic Gradient Descent and Its Variants in Machine Learning
    Netrapalli, Praneeth
    Journal of the Indian Institute of Science, 2019, 99 (02): 201-213
  • [4] Recent Advances in Stochastic Gradient Descent in Deep Learning
    Tian, Yingjie
    Zhang, Yuqi
    Zhang, Haibin
    Mathematics, 2023, 11 (03)
  • [5] Stochastic Gradient Descent with Polyak's Learning Rate
    Prazeres, Mariana
    Oberman, Adam M.
    Journal of Scientific Computing, 2021, 89 (01)
  • [6] Deep Learning for Sea Cucumber Detection Using Stochastic Gradient Descent Algorithm
    Zhang, Huaqiang
    Yu, Fusheng
    Sun, Jincheng
    Shen, Xiaoqin
    Li, Kun
    European Journal of Remote Sensing, 2020, 53: 53-62
  • [7] An Efficient, Distributed Stochastic Gradient Descent Algorithm for Deep-Learning Applications
    Cong, Guojing
    Bhardwaj, Onkar
    Feng, Minwei
    2017 46th International Conference on Parallel Processing (ICPP), 2017: 11-20