Why Spiking Neural Networks Are Efficient: A Theorem

Cited by: 3
Authors
Beer, Michael [1]
Urenda, Julio [2 ]
Kosheleva, Olga [2 ]
Kreinovich, Vladik [2 ]
Affiliations
[1] Leibniz Univ Hannover, D-30167 Hannover, Germany
[2] Univ Texas El Paso, El Paso, TX 79968 USA
Source
INFORMATION PROCESSING AND MANAGEMENT OF UNCERTAINTY IN KNOWLEDGE-BASED SYSTEMS, IPMU 2020, PT I | 2020 / Vol. 1237
Funding
National Science Foundation (USA)
Keywords
Spiking neural networks; Shift-invariance; Scale-invariance;
DOI
10.1007/978-3-030-50146-4_5
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Current artificial neural networks are very successful in many machine learning applications, but in some cases they still lag behind human abilities. To improve their performance, a natural idea is to simulate features of biological neurons that are not yet implemented in machine learning. One such feature is that in biological neural networks, signals are represented by trains of spikes. Researchers have tried adding this spikiness to machine learning and have indeed obtained very good results, especially when processing time series (and, more generally, spatio-temporal data). In this paper, we provide a possible theoretical explanation for this empirical success.
Pages: 59-69
Page count: 11