Deep Layer-wise Networks Have Closed-Form Weights

Cited by: 0
Authors
Wu, Chieh [1 ]
Masoomi, Aria [1 ]
Gretton, Arthur [2 ]
Dy, Jennifer [1 ]
Affiliations
[1] Northeastern Univ, Boston, MA 02115 USA
[2] UCL, London, England
Source
INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 151 | 2022
Keywords
APPROXIMATION; BACKPROPAGATION;
DOI
Not available
CLC Number
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
There is an ongoing debate within the neuroscience community over whether the brain performs backpropagation (BP). To better mimic the brain, training a network one layer at a time with only a "single forward pass" has been proposed as an alternative that bypasses BP; we refer to these networks as "layer-wise" networks. We continue the work on layer-wise networks by answering two outstanding questions. First, do they have a closed-form solution? Second, how do we know when to stop adding more layers? This work proves that the Kernel Mean Embedding is the closed-form weight that achieves the network's global optimum, and that it drives these networks to converge towards a highly desirable kernel for classification; we call it the Neural Indicator Kernel.
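To make the closed-form claim concrete: the empirical Kernel Mean Embedding (KME) of a sample {x_1, ..., x_n} under a kernel k is mu_hat(.) = (1/n) * sum_i k(x_i, .), so evaluating the "weight" reduces to averaging kernel values, with no gradient descent. The following is a minimal sketch of that idea using a Gaussian RBF kernel and a toy MMD-style class-scoring rule; the helper names (rbf_kernel, kme_evaluate), the synthetic data, and the scoring rule are our own illustrative assumptions, not the paper's exact layer-wise construction.

    import numpy as np

    def rbf_kernel(X, Y, sigma=1.0):
        # Gaussian RBF kernel matrix: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
        sq_dists = (
            np.sum(X**2, axis=1)[:, None]
            + np.sum(Y**2, axis=1)[None, :]
            - 2.0 * X @ Y.T
        )
        return np.exp(-sq_dists / (2.0 * sigma**2))

    def kme_evaluate(X_sample, X_query, sigma=1.0):
        # Empirical kernel mean embedding of X_sample evaluated at each query
        # point: mu_hat(q) = (1/n) * sum_i k(x_i, q). Closed form; no training loop.
        return rbf_kernel(X_query, X_sample, sigma).mean(axis=1)

    # Toy usage (illustrative only): score test points by whichever class's
    # embedding assigns them more mass, an MMD-style decision rule.
    rng = np.random.default_rng(0)
    X_pos = rng.normal(loc=+2.0, size=(50, 2))   # synthetic class +1 sample
    X_neg = rng.normal(loc=-2.0, size=(50, 2))   # synthetic class -1 sample
    X_test = np.vstack([rng.normal(loc=+2.0, size=(3, 2)),
                        rng.normal(loc=-2.0, size=(3, 2))])

    scores = kme_evaluate(X_pos, X_test) - kme_evaluate(X_neg, X_test)
    print(np.sign(scores))   # expected: mostly [1, 1, 1, -1, -1, -1]

The point the sketch mirrors is that each "weight" is fixed by the data in a single pass; the paper's contribution is analyzing what happens when such layers are stacked.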
Pages: 188-225
Page count: 38