Statistical inference: learning in artificial neural networks

Cited: 15
Authors
Yang, HH
Murata, N
Amari, S
Affiliations
[1] Oregon Grad Inst, Dept Comp Sci, Portland, OR 97291 USA
[2] RIKEN, BSI, Lab Informat Synth, Wako, Saitama 35101, Japan
Keywords
DOI
10.1016/S1364-6613(97)01114-5
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology];
Subject Classification Codes
03 ; 0303 ; 030303 ; 04 ; 0402 ;
Abstract
Artificial neural networks (ANNs) are widely used to model low-level neural activities and high-level cognitive functions. In this article, we review the application of statistical inference to learning in ANNs. Statistical inference provides an objective way to derive learning algorithms, both for training and for evaluating the performance of trained ANNs. Solutions to the over-fitting problem by model-selection methods, based either on conventional statistical approaches or on a Bayesian approach, are discussed. The use of supervised and unsupervised learning algorithms for ANNs is reviewed. Training a multilayer ANN by supervised learning is equivalent to nonlinear regression. The ensemble methods described here, bagging and arcing, can be applied to combine ANNs into a new predictor with improved performance. Unsupervised learning algorithms, derived either from the Hebbian law for bottom-up self-organization or from global objective functions for top-down self-organization, are also discussed.
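The abstract's two central points, that supervised training of a multilayer ANN is a form of nonlinear regression and that bagging combines several trained networks into an improved predictor, can be illustrated with a short sketch. The following Python/NumPy example is a minimal illustration only: the toy sine-regression data, the network size, the learning rate, and the number of ensemble members are assumptions made here, not details taken from the article. It fits several one-hidden-layer networks by gradient descent on squared error, each on a bootstrap resample, and averages their outputs to form a bagged predictor:

import numpy as np

rng = np.random.default_rng(0)

# Toy data for nonlinear regression: y = sin(x) + noise (assumed example, not from the article)
X = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

def train_mlp(X, y, hidden=10, lr=0.05, epochs=2000, seed=0):
    # Fit a one-hidden-layer network by gradient descent on squared error.
    # Under a Gaussian noise model this is ordinary nonlinear least-squares regression.
    r = np.random.default_rng(seed)
    W1 = 0.5 * r.standard_normal((X.shape[1], hidden))
    b1 = np.zeros(hidden)
    w2 = 0.5 * r.standard_normal(hidden)
    b2 = 0.0
    n = len(y)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # hidden-layer activations
        err = h @ w2 + b2 - y               # residuals of the network output
        # Backpropagate the mean-squared-error gradient
        dh = np.outer(err, w2) * (1.0 - h ** 2)
        W1 -= lr * (X.T @ dh) / n
        b1 -= lr * dh.mean(axis=0)
        w2 -= lr * (h.T @ err) / n
        b2 -= lr * err.mean()
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ w2 + b2

def bagged_predictor(X, y, n_members=10):
    # Bagging: train each member network on a bootstrap resample,
    # then average the member outputs to form the combined predictor.
    members = []
    for b in range(n_members):
        idx = rng.integers(0, len(y), size=len(y))   # sample with replacement
        members.append(train_mlp(X[idx], y[idx], seed=b))
    return lambda Xq: np.mean([f(Xq) for f in members], axis=0)

ensemble = bagged_predictor(X, y)
X_test = np.linspace(-3.0, 3.0, 7).reshape(-1, 1)
print(ensemble(X_test))                     # ensemble estimate of sin(x) at the test points

Averaging over networks trained on bootstrap resamples reduces the variance of the individual nonlinear regressors, which is the rationale for combining ANNs into a predictor with improved performance.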
Pages: 4-10
Page count: 7
Related papers
50 records in total
[31]   Trigonometric Inference Providing Learning in Deep Neural Networks [J].
Cai, Jingyong ;
Takemoto, Masashi ;
Qiu, Yuming ;
Nakajo, Hironori .
APPLIED SCIENCES-BASEL, 2021, 11 (15)
[32]   Statistical mechanics of EKF learning in neural networks [J].
Schottky, B ;
Saad, D .
JOURNAL OF PHYSICS A-MATHEMATICAL AND GENERAL, 1999, 32 (09) :1605-1621
[33]   AN UNSUPERVISED LEARNING TECHNIQUE FOR ARTIFICIAL NEURAL NETWORKS [J].
ATIYA, AF .
NEURAL NETWORKS, 1990, 3 (06) :707-711
[34]   INTRODUCTION TO COMPUTATION AND LEARNING IN ARTIFICIAL NEURAL NETWORKS [J].
MASSON, E ;
WANG, YJ .
EUROPEAN JOURNAL OF OPERATIONAL RESEARCH, 1990, 47 (01) :1-28
[35]   Variational Learning for Quantum Artificial Neural Networks [J].
Tacchino, Francesco ;
Mangini, Stefano ;
Barkoutsos, Panagiotis K. L. ;
Macchiavello, Chiara ;
Gerace, Dario ;
Tavernelli, Ivano ;
Bajoni, Daniele .
IEEE TRANSACTIONS ON QUANTUM ENGINEERING, 2021, 2
[37]   HOMEOSTATIC LEARNING RULE FOR ARTIFICIAL NEURAL NETWORKS [J].
Ruzek, M. .
NEURAL NETWORK WORLD, 2018, 28 (02) :179-189
[38]   The Application of Artificial Neural Networks in Learning Analytics [J].
Jamila, Mustafina ;
Lenar, Galiullin ;
Rustam, Valiev ;
Mahyoub, Mohammed .
2020 13TH INTERNATIONAL CONFERENCE ON DEVELOPMENTS IN ESYSTEMS ENGINEERING (DESE 2020), 2020, :384-389
[39]   Inductive learning inability of artificial neural networks [J].
Bhavsar, VC ;
Ghorbani, AA ;
Goldfarb, L .
2000 CANADIAN CONFERENCE ON ELECTRICAL AND COMPUTER ENGINEERING, CONFERENCE PROCEEDINGS, VOLS 1 AND 2: NAVIGATING TO A NEW ERA, 2000, :712-716
[40]   Meta learning evolutionary artificial neural networks [J].
Abraham, A .
NEUROCOMPUTING, 2004, 56 (1-4) :1-38