Learning Through Deterministic Assignment of Hidden Parameters

Cited: 7
Authors
Fang, Jian [1 ]
Lin, Shaobo [2 ,3 ]
Xu, Zongben [1 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Math & Stat, Xian 710048, Peoples R China
[2] Wenzhou Univ, Dept Math, Wenzhou 325035, Peoples R China
[3] Chinese Acad Sci, Shenyang Inst Automat, State Key Lab Robot, Shenyang 110016, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Neural networks; Neurons; Uncertainty; Optimization; Cybernetics; Supervised learning; Robots; Bright parameters; hidden parameters; learning rate; neural networks; supervised learning; NEURAL-NETWORKS; APPROXIMATION; MACHINE; ENERGY; ENTROPY; POINTS;
DOI
10.1109/TCYB.2018.2885029
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Supervised learning frequently boils down to determining the hidden and bright parameters of a parameterized hypothesis space from finite input-output samples. The hidden parameters determine the nonlinear mechanism of an estimator, while the bright parameters characterize its linear mechanism. In the traditional learning paradigm, hidden and bright parameters are not distinguished and are trained simultaneously in a single learning process. Such one-stage learning (OSL) facilitates theoretical analysis but suffers from a high computational burden. In this paper, we propose a two-stage learning scheme, learning through deterministic assignment of hidden parameters (LtDaHP), which generates the hidden parameters deterministically using minimal Riesz energy points on a sphere and equally spaced points in an interval. We show theoretically that, with such a deterministic assignment of hidden parameters, a neural network realization of LtDaHP achieves almost the same generalization performance as OSL. LtDaHP thus provides an effective way to overcome the high computational burden of OSL. We present a series of simulations and application examples to support the superior performance of LtDaHP.
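For intuition, the Python sketch below illustrates the two-stage idea described in the abstract; it is a minimal sketch, not the authors' exact algorithm. It assumes a single-hidden-layer network f(x) = sum_i a_i tanh(w_i . x + b_i) with inputs in R^3; the function names (fibonacci_sphere, ltdahp_fit, ltdahp_predict), the tanh activation, the Fibonacci lattice used as a cheap stand-in for minimal Riesz energy points, and the bias interval are all assumptions of this illustration. The inner weights w_i and biases b_i (hidden parameters) are assigned deterministically; only the bright parameters a_i are fit, by linear least squares.

import numpy as np

def fibonacci_sphere(m):
    # Near-uniform points on the unit sphere S^2: a cheap stand-in for
    # minimal Riesz energy configurations (an assumption of this sketch,
    # not the paper's exact construction).
    k = np.arange(m)
    phi = np.pi * (3.0 - np.sqrt(5.0)) * k             # golden-angle longitude steps
    z = 1.0 - 2.0 * (k + 0.5) / m                      # equally spaced heights in (-1, 1)
    r = np.sqrt(1.0 - z ** 2)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def ltdahp_fit(X, y, m=80, bias_range=(-2.0, 2.0)):
    # Stage 1: assign hidden parameters deterministically.
    W = fibonacci_sphere(m)                            # inner weights on the sphere
    b = np.linspace(bias_range[0], bias_range[1], m)   # equally spaced biases
    # Stage 2: solve for the bright (linear) parameters by least squares.
    H = np.tanh(X @ W.T + b)                           # hidden-layer feature matrix
    a, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, a

def ltdahp_predict(X, W, b, a):
    return np.tanh(X @ W.T + b) @ a

# Toy usage: regress a smooth target on [-1, 1]^3.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(500, 3))
y = np.sin(np.pi * X[:, 0]) * X[:, 1] + 0.1 * X[:, 2]
W, b, a = ltdahp_fit(X, y)
rmse = np.sqrt(np.mean((ltdahp_predict(X, W, b, a) - y) ** 2))
print("train RMSE:", rmse)

Because the hidden parameters are fixed in advance, training reduces to a single linear least-squares solve, which is the source of the computational advantage over OSL that the abstract describes.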
Pages: 2321-2334
Page count: 14