Weight Agnostic Neural Networks

Cited by: 0
Authors
Gaier, Adam [1]
Ha, David [2]
Affiliations
[1] Univ Lorraine, CNRS, INRIA, Bonn Rhein Sieg Univ Appl Sci, Metz, France
[2] Google Brain, Tokyo, Japan
Source
ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019) | 2019 / Vol. 32
Keywords
ALGORITHM; SYNAPTOGENESIS; RECOGNITION; TOPOLOGY; SYSTEMS; CORTEX
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Not all neural network architectures are created equal; some perform much better than others for certain tasks. But how important are the weight parameters of a neural network compared to its architecture? In this work, we question to what extent neural network architectures alone, without learning any weight parameters, can encode solutions for a given task. We propose a search method for neural network architectures that can already perform a task without any explicit weight training. To evaluate these networks, we populate the connections with a single shared weight parameter sampled from a uniform random distribution and measure the expected performance. We demonstrate that our method can find minimal neural network architectures that can perform several reinforcement learning tasks without weight training. On a supervised learning domain, we find network architectures that achieve much higher than chance accuracy on MNIST using random weights.
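The evaluation procedure outlined in the abstract (populate every connection with a single shared weight drawn from a uniform distribution, then measure expected performance) can be sketched in a few lines of Python. This is a minimal illustration only, assuming a feed-forward topology encoded as an adjacency matrix with nodes in topological order and a hypothetical run_episode environment hook; it is not the authors' released implementation.

import numpy as np

def forward(adjacency, activations, x, shared_weight):
    # Propagate inputs through a feed-forward DAG whose nodes are assumed
    # to be in topological order; every connection carries the same weight.
    n_nodes = adjacency.shape[0]
    values = np.zeros(n_nodes)
    values[:len(x)] = x                      # first nodes are the inputs
    for j in range(len(x), n_nodes):
        pre = values @ (adjacency[:, j] * shared_weight)
        values[j] = activations[j](pre)      # per-node activation function
    return values[-1]                        # last node is the output

def evaluate(adjacency, activations, run_episode, n_samples=6):
    # Score a fixed topology as the mean reward over shared-weight samples
    # from a uniform range, mirroring the abstract's evaluation scheme.
    rewards = []
    for w in np.random.uniform(-2.0, 2.0, size=n_samples):
        policy = lambda obs, w=w: forward(adjacency, activations, obs, w)
        rewards.append(run_episode(policy))
    return float(np.mean(rewards))

A search over architectures would then rank candidate topologies by scores of this kind (the paper also favors simpler networks); the sampling range, encoding, and helper names above are illustrative placeholders.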
Pages: 15