Layer-Parallel Training of Deep Residual Neural Networks

Cited by: 47
Authors
Guenther, Stefanie [1 ]
Ruthotto, Lars [2 ]
Schroder, Jacob B. [3 ]
Cyr, Eric C. [4 ]
Gauger, Nicolas R. [1 ]
Affiliations
[1] TU Kaiserslautern, Sci Comp Grp, D-67663 Kaiserslautern, Germany
[2] Emory Univ, Dept Math & Comp Sci, Atlanta, GA 30322 USA
[3] Univ New Mexico, Dept Math & Stat, Albuquerque, NM 87131 USA
[4] Sandia Natl Labs, Computat Math Dept, Albuquerque, NM 87185 USA
Source
SIAM JOURNAL ON MATHEMATICS OF DATA SCIENCE | 2020, Vol. 2, No. 1
Funding
U.S. National Science Foundation;
Keywords
deep learning; residual networks; supervised learning; optimal control; layer-parallelization; parallel-in-time; simultaneous optimization; MULTIGRID REDUCTION; TIME-INTEGRATION; OPTIMIZATION;
DOI
10.1137/19M1247620
CLC Classification Code
O29 [Applied Mathematics];
Subject Classification Code
070104;
Abstract
Residual neural networks (ResNets) are a promising class of deep neural networks that have shown excellent performance for a number of learning tasks, e.g., image classification and recognition. Mathematically, ResNet architectures can be interpreted as forward Euler discretizations of a nonlinear initial value problem whose time-dependent control variables represent the weights of the neural network. Hence, training a ResNet can be cast as an optimal control problem of the associated dynamical system. For similar time-dependent optimal control problems arising in engineering applications, parallel-in-time methods have shown notable improvements in scalability. This paper demonstrates the use of those techniques for efficient and effective training of ResNets. The proposed algorithms replace the classical (sequential) forward and backward propagation through the network layers with a parallel nonlinear multigrid iteration applied to the layer domain. This adds a new dimension of parallelism across layers that is attractive when training very deep networks. From this basic idea, we derive multiple layer-parallel methods. The most efficient version employs a simultaneous optimization approach where updates to the network parameters are based on inexact gradient information in order to speed up the training process. Using numerical examples from supervised classification, we demonstrate that the new approach achieves a training performance similar to that of traditional methods, but enables layer-parallelism and thus provides speedup over layer-serial methods through greater concurrency.
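Illustrative note (not part of the original record): the forward Euler interpretation described in the abstract can be sketched as follows, using generic placeholder symbols $u_k$, $\theta_k$, $F$, $h$, $\ell$, and $R$ rather than the authors' exact notation. With hidden state $u_k$ at layer $k$, layer weights $\theta_k$, and step size $h$, a residual block reads

$$u_{k+1} = u_k + h\,F(u_k, \theta_k), \qquad k = 0, \dots, N-1,$$

which is a forward Euler discretization of the initial value problem $\dot{u}(t) = F(u(t), \theta(t))$ with $u(0) = u_0$ given by the input data. Training then takes the form of the time-dependent optimal control problem

$$\min_{\theta}\; \ell\big(u(T)\big) + R(\theta) \quad \text{subject to} \quad \dot{u}(t) = F(u(t), \theta(t)),\; u(0) = u_0,$$

where $\ell$ is the classification loss at the final state and $R$ is a regularizer. The layer-parallel methods of the paper replace the sequential sweep over $k$ (and the corresponding backward propagation) with a parallel nonlinear multigrid iteration over this layer/time domain.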
Pages: 1-23 (23 pages)