Randomized Sparse Neural Galerkin Schemes for Solving Evolution Equations with Deep Networks

Cited by: 0
Authors
Berman, Jules [1 ]
Peherstorfer, Benjamin [1 ]
Affiliations
[1] New York Univ, Courant Inst Math Sci, New York, NY 10012 USA
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023) | 2023
Funding
U.S. National Science Foundation
Keywords
Model reduction; Time; Approximation; Dynamics
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Training neural networks sequentially in time to approximate solution fields of time-dependent partial differential equations can be beneficial for preserving causality and other physics properties; however, the sequential-in-time training is numerically challenging because training errors quickly accumulate and amplify over time. This work introduces Neural Galerkin schemes that update randomized sparse subsets of network parameters at each time step. The randomization avoids overfitting locally in time and so helps prevent the error from accumulating quickly over the sequential-in-time training, which is motivated by dropout that addresses a similar issue of overfitting due to neuron co-adaptation. The sparsity of the update reduces the computational costs of training without losing expressiveness because many of the network parameters are redundant locally at each time step. In numerical experiments with a wide range of evolution equations, the proposed scheme with randomized sparse updates is up to two orders of magnitude more accurate at a fixed computational budget and up to two orders of magnitude faster at a fixed accuracy than schemes with dense updates.
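The core idea in the abstract — at each time step, draw a fresh random sparse subset of the network parameters and solve the Galerkin update only over that subset — can be illustrated with a minimal NumPy sketch. This is a hedged toy example, not the authors' implementation: the linear-in-parameters feature model, the `features`, `rhs`, and `sparsity` names, and the explicit-Euler least-squares step are all illustrative assumptions standing in for the deep network and time integrator used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_params = 200
theta = rng.normal(size=n_params) * 0.1  # network parameters

def features(x):
    # Hypothetical feature map: for a linear-in-theta model u = Phi @ theta,
    # Phi is also the Jacobian du/dtheta needed by the Galerkin condition.
    freqs = np.linspace(1.0, 10.0, n_params)
    return np.sin(np.outer(x, freqs))

def rhs(u):
    # Placeholder right-hand side f(u) of the evolution equation u_t = f(u).
    return -u

x = np.linspace(0.0, 1.0, 64)   # spatial collocation points
dt, n_steps = 1e-2, 50
sparsity = 0.1                   # fraction of parameters updated per step

for step in range(n_steps):
    Phi = features(x)
    u = Phi @ theta
    # Randomized sparse update: a new random subset of parameters each step,
    # which avoids local-in-time overfitting (dropout-like) and shrinks the
    # least-squares system that must be solved.
    idx = rng.choice(n_params, size=int(sparsity * n_params), replace=False)
    Phi_s = Phi[:, idx]
    # Galerkin condition restricted to the subset:
    # Phi_s @ dtheta ≈ dt * f(u), solved in the least-squares sense.
    dtheta, *_ = np.linalg.lstsq(Phi_s, dt * rhs(u), rcond=None)
    theta[idx] += dtheta
```

Because each step solves a least-squares system with only `sparsity * n_params` columns instead of `n_params`, the per-step cost drops substantially, which is the source of the speedups reported in the abstract.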
Pages: 18