Deep Layer-wise Networks Have Closed-Form Weights

Cited by: 0
Authors
Wu, Chieh [1 ]
Masoomi, Aria [1 ]
Gretton, Arthur [2 ]
Dy, Jennifer [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] UCL, London, England
Source
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151, 2022
Keywords
APPROXIMATION; BACKPROPAGATION
DOI
Not available
CLC Classification
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
There is an ongoing debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP). To better mimic the brain, training a network one layer at a time with only a "single forward pass" has been proposed as an alternative that bypasses BP; we refer to these networks as "layer-wise" networks. We continue the work on layer-wise networks by answering two outstanding questions. First, do they have a closed-form solution? Second, how do we know when to stop adding more layers? This work proves that the Kernel Mean Embedding is the closed-form weight that achieves the network's global optimum while driving these networks to converge towards a kernel that is highly desirable for classification, which we call the Neural Indicator Kernel.
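To make the abstract's two claims concrete, the sketch below (an illustration, not the authors' algorithm) builds a layer whose "weights" are the empirical kernel mean embeddings of each class, computed in a single forward pass with no gradients, and stops stacking layers once the Gram matrix stops approaching the same-class indicator kernel. The RBF kernel, the median-heuristic bandwidth, and the helper names sq_dists, kme_layer, and stack_layers are all assumptions introduced here; consult the paper for the precise closed form.

import numpy as np

def sq_dists(X, Y):
    # Pairwise squared Euclidean distances, shape (len(X), len(Y)).
    return ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)

def median_gamma(X):
    # Median-heuristic RBF bandwidth: gamma = 1 / (2 * median sq. distance).
    d = sq_dists(X, X)
    d = d[d > 0]
    return 1.0 / (2.0 * np.median(d)) if d.size else 1.0

def kme_layer(X, y, gamma):
    # Closed-form "layer": each sample is re-represented by its similarity
    # to the empirical kernel mean embedding of every class,
    #   <mu_c, phi(x)> = mean over {i : y_i = c} of k(x_i, x),
    # so no gradient-based training is involved.
    K = np.exp(-gamma * sq_dists(X, X))
    return np.stack([K[:, y == c].mean(axis=1) for c in np.unique(y)], axis=1)

def stack_layers(X, y, max_depth=10):
    # Keep adding layers while the Gram matrix moves toward the ideal
    # indicator kernel (1 for same-class pairs, 0 otherwise); stop when
    # progress stalls -- one plausible reading of the stopping question.
    target = (y[:, None] == y[None, :]).astype(float)
    H, best = X, np.inf
    for _ in range(max_depth):
        H_next = kme_layer(H, y, median_gamma(H))
        K = np.exp(-median_gamma(H_next) * sq_dists(H_next, H_next))
        err = np.abs(K - target).mean()
        if not err < best:  # no further progress (also guards against NaN)
            break
        H, best = H_next, err
    return H

# Toy usage: two Gaussian blobs in 5 dimensions.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 5)), rng.normal(3, 1, (20, 5))])
y = np.array([0] * 20 + [1] * 20)
H = stack_layers(X, y)
print(H.shape)  # (40, 2): one coordinate per class mean embedding

Each layer here is a single forward pass whose output dimension equals the number of classes; the stopping rule monitors distance to the indicator target rather than a validation loss, echoing the abstract's second question.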
Pages: 188-225
Page count: 38