Long short-term memory with activation on gradient

Cited by: 13
Authors
Qin, Chuan [1 ,2 ]
Chen, Liangming [3 ,4 ]
Cai, Zangtai [2 ]
Liu, Mei [1 ,2 ]
Jin, Long [3 ]
Affiliations
[1] Lanzhou Univ, Sch Informat Sci & Engn, Lanzhou 730000, Peoples R China
[2] Qinghai Normal Univ, State Key Lab Tibetan Intelligent Informat Proc &, Xining 810008, Peoples R China
[3] Chinese Acad Sci, Chongqing Inst Green & Intelligent Technol, Chongqing Key Lab Big Data & Intelligent Comp, Chongqing 400714, Peoples R China
[4] Univ Chinese Acad Sci, Chongqing Sch, Chongqing 400714, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Long short-term memory (LSTM); Gradient activation; Vanishing gradient problem; Exploding gradient problem; Ill-conditioned problem; NETWORK;
DOI
10.1016/j.neunet.2023.04.026
CLC number
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
As the number of long short-term memory (LSTM) layers increases, vanishing/exploding gradient problems are exacerbated and degrade the performance of the LSTM. In addition, the ill-conditioned problem occurs during the training of the LSTM and adversely affects its convergence. In this work, a simple and effective gradient activation method is applied to the LSTM, and empirical criteria for choosing the gradient activation hyperparameters are established. Activating the gradient refers to modifying the gradient with a specific function, named the gradient activation function. Moreover, different activation functions and different gradient operations are compared to show that gradient activation is effective for the LSTM. Furthermore, comparative experiments are conducted, and their results show that gradient activation alleviates the above problems and accelerates the convergence of the LSTM. The source code is publicly available at https://github.com/LongJin-lab/ACT-In-NLP. © 2023 Elsevier Ltd. All rights reserved.
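The abstract describes gradient activation as modifying the gradient with a function before it is used in the update. The paper's actual activation function and hyperparameter criteria are not given in this record, so the sketch below is only illustrative: it uses a scaled tanh as an assumed bounded, sign-preserving gradient activation function, and the names `activate_gradient` and `scale` are hypothetical, not from the paper.

```python
import numpy as np

def activate_gradient(grad, scale=1.0):
    """Hypothetical gradient activation: pass each gradient component
    through a bounded, sign-preserving function (scaled tanh here).
    Large (exploding) components are squashed toward +/- scale, while
    small components pass through nearly unchanged (tanh(x) ~ x near 0)."""
    return scale * np.tanh(grad / scale)

# One plain gradient-descent step with the activated gradient:
w = np.array([0.5, -0.3])
grad = np.array([1e6, 1e-3])  # one exploding and one tiny component
lr = 0.1
w_new = w - lr * activate_gradient(grad, scale=1.0)
```

Under this assumed function, the exploding component is clipped smoothly to about `scale` while the tiny component is left essentially intact, which is the qualitative behavior the abstract attributes to gradient activation (mitigating exploding gradients without distorting well-behaved ones).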
Pages: 135-145 (11 pages)
References
46 records
  • [1] Bottou, Leon; Curtis, Frank E.; Nocedal, Jorge. Optimization Methods for Large-Scale Machine Learning. SIAM Review, 2018, 60(2): 223-311.
  • [2] Brust, C. A., 2016. arXiv:1606.04333.
  • [3] Cai, Tianle, 2021. Proceedings of Machine Learning Research, Vol. 139.
  • [4] Chabanne, Herve; Danger, Jean-Luc; Guiga, Linda; Kuhne, Ulrich. Side channel attacks for architecture extraction of neural networks. CAAI Transactions on Intelligence Technology, 2021, 6(1): 3-16.
  • [5] Zeiler, M. D., 2012. arXiv:1212.5701.
  • [6] Dosovitskiy, A., 2021. arXiv:2010.11929.
  • [7] Finkel, J. R., 2009. P HUM LANG TECHN 200.
  • [8] Grave, E., 2018. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), p. 3483.
  • [9] Guille-Escuret, C., 2021. Proceedings of Machine Learning Research, Vol. 130.
  • [10] Gutman, David H.; Pena, Javier F. The condition number of a function relative to a set. Mathematical Programming, 2021, 188(1): 255-294.