Fast, Simple and Accurate Handwritten Digit Classification by Training Shallow Neural Network Classifiers with the 'Extreme Learning Machine' Algorithm

Cited: 60
Authors
McDonnell, Mark D. [1]
Tissera, Migel D. [1]
Vladusich, Tony [1]
van Schaik, Andre [2]
Tapson, Jonathan [2]
Affiliations
[1] Univ S Australia, Inst Telecommun Res, Sch Informat Technol & Math Sci, Computat & Theoret Neurosci Lab, Mawson Lakes, SA 5095, Australia
[2] Univ Western Sydney, MARCS Inst, Biomed Engn & Neurosci Grp, Penrith, NSW 2751, Australia
Funding
Australian Research Council
Keywords
REGRESSION; ONLINE; DEEP; BIG
DOI
10.1371/journal.pone.0134254
Chinese Library Classification (CLC) Codes
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biosciences]; N [General Natural Sciences]
Subject Classification Codes
07; 0710; 09
Abstract
Recent advances in training deep (multi-layer) architectures have inspired a renaissance in neural network use. For example, deep convolutional networks are becoming the default option for difficult tasks on large datasets, such as image and speech recognition. However, here we show that error rates below 1% on the MNIST handwritten digit benchmark can be replicated with shallow non-convolutional neural networks. This is achieved by training such networks using the 'Extreme Learning Machine' (ELM) approach, which also enables a very rapid training time (~10 minutes). Adding distortions, as is common practice for MNIST, reduces error rates even further. Our methods are also shown to achieve error rates below 5.5% on the NORB image database. To achieve these results, we introduce several enhancements to the standard ELM algorithm, which individually and in combination can significantly improve performance. The main innovation is to ensure each hidden unit operates only on a randomly sized and positioned patch of each image. This form of random 'receptive field' sampling of the input ensures the input weight matrix is sparse, with about 90% of weights equal to zero. Furthermore, combining our methods with a small number of iterations of a single-batch backpropagation method can significantly reduce the number of hidden units required to achieve a particular performance. Our close-to-state-of-the-art results for MNIST and NORB suggest that the ease of use and accuracy of the ELM algorithm for designing a single-hidden-layer neural network classifier should earn it greater consideration, either as a standalone method for simpler problems or as the final classification stage in deep neural networks applied to more difficult problems.
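
A minimal NumPy sketch may help make the approach summarized in the abstract concrete: fixed random input weights restricted to randomly sized and positioned image patches (so most entries are zero), a single nonlinear hidden layer, and output weights solved in one shot by ridge-regularized least squares. The patch-size range, ridge constant, tanh nonlinearity, and hidden-layer size below are illustrative assumptions, not the paper's exact settings.

import numpy as np

def receptive_field_weights(n_hidden, img_h, img_w, rng):
    # Each hidden unit connects only to a randomly sized, randomly
    # positioned square patch of the image; all other input weights
    # stay zero, yielding a sparse input weight matrix.
    W = np.zeros((n_hidden, img_h * img_w))
    for i in range(n_hidden):
        side = rng.integers(6, img_h + 1)          # assumed patch-size range
        top = rng.integers(0, img_h - side + 1)    # random patch position
        left = rng.integers(0, img_w - side + 1)
        mask = np.zeros((img_h, img_w), dtype=bool)
        mask[top:top + side, left:left + side] = True
        W[i, mask.ravel()] = rng.standard_normal(side * side)
    return W

def elm_train(X, Y, W, ridge=1e-6):
    # ELM training: the random hidden layer stays fixed; only the
    # output weights are learned, via regularized linear least squares.
    H = np.tanh(X @ W.T)                           # hidden activations
    A = H.T @ H + ridge * np.eye(H.shape[1])
    return np.linalg.solve(A, H.T @ Y)             # output weights

def elm_predict(X, W, B):
    return np.argmax(np.tanh(X @ W.T) @ B, axis=1)

# Example with MNIST-shaped data: X holds rows of flattened 28x28
# images scaled to [0, 1]; Y is one-hot over the 10 digit classes.
rng = np.random.default_rng(0)
W = receptive_field_weights(n_hidden=2000, img_h=28, img_w=28, rng=rng)
# B = elm_train(X_train, Y_train, W)
# labels = elm_predict(X_test, W, B)

Because the only learned parameters come from a single linear solve, training cost is dominated by one matrix factorization rather than many gradient-descent iterations, which is what keeps training times on the order of minutes even with thousands of hidden units.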
Pages: 20