Using the Metropolis algorithm to explore the loss surface of a recurrent neural network

Cited by: 0
Authors
Casert, Corneel [1]
Whitelam, Stephen [1]
Affiliation
[1] Molecular Foundry, Lawrence Berkeley National Laboratory, 1 Cyclotron Road, Berkeley, CA 94720
DOI
10.1063/5.0221223
Abstract
In the limit of small trial moves, the Metropolis Monte Carlo algorithm is equivalent to gradient descent on the energy function in the presence of Gaussian white noise. This observation was originally used to demonstrate a correspondence between Metropolis Monte Carlo moves of model molecules and overdamped Langevin dynamics, but it also applies in the context of training a neural network: making small random changes to the weights of a neural network, accepted with the Metropolis probability, with the loss function playing the role of energy, has the same effect as training by explicit gradient descent in the presence of Gaussian white noise. We explore this correspondence in the context of a simple recurrent neural network. We also explore regimes in which this correspondence breaks down, where the gradient of the loss function becomes very large or small. In these regimes the Metropolis algorithm can still effect training, and so can be used as a probe of the loss function of a neural network in regimes in which gradient descent struggles. We also show that training can be accelerated by making purposefully designed Monte Carlo trial moves of neural-network weights. © 2024 American Institute of Physics. All rights reserved.
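The training procedure described in the abstract can be sketched concretely. The snippet below is a minimal illustration, not the authors' implementation: the recurrent architecture, the squared-error loss, the Gaussian step size sigma, and the effective temperature T are all assumptions chosen for clarity.

```python
import numpy as np

# Minimal sketch (not the paper's code): Metropolis Monte Carlo training of a
# tiny recurrent network. The loss function plays the role of energy.
rng = np.random.default_rng(0)

def rnn_loss(weights, x_seq, y_target):
    """Squared-error loss of a small tanh RNN (illustrative architecture)."""
    W_in, W_rec, W_out = weights
    h = np.zeros(W_rec.shape[0])
    for x in x_seq:
        h = np.tanh(W_in @ x + W_rec @ h)
    return float(np.sum((W_out @ h - y_target) ** 2))

def metropolis_step(weights, current_loss, sigma, T, x_seq, y_target):
    """Perturb every weight by Gaussian noise of scale sigma and accept the move
    with the Metropolis probability min(1, exp(-(L_new - L_old)/T))."""
    trial = [W + sigma * rng.standard_normal(W.shape) for W in weights]
    trial_loss = rnn_loss(trial, x_seq, y_target)
    if trial_loss <= current_loss or rng.random() < np.exp(-(trial_loss - current_loss) / T):
        return trial, trial_loss   # accept
    return weights, current_loss   # reject

# Toy problem: map a short input sequence to a scalar target.
n_in, n_h, n_out = 2, 8, 1
x_seq = [rng.standard_normal(n_in) for _ in range(5)]
y_target = np.array([0.5])
weights = [0.1 * rng.standard_normal(s) for s in [(n_h, n_in), (n_h, n_h), (n_out, n_h)]]

L = rnn_loss(weights, x_seq, y_target)
for step in range(20000):
    weights, L = metropolis_step(weights, L, sigma=1e-2, T=1e-4,
                                 x_seq=x_seq, y_target=y_target)
print("final loss:", L)
```

In the small-sigma limit this acceptance rule behaves like gradient descent on the loss in the presence of Gaussian white noise, which is the correspondence the paper examines; replacing the isotropic Gaussian perturbation with structured trial moves is the kind of purposefully designed move the abstract says can accelerate training.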