On extreme learning machine for ε-insensitive regression in the primal by Newton method

Cited by: 10
Authors
Balasundaram, S. [1 ]
Kapil [1 ]
Affiliations
[1] Jawaharlal Nehru Univ, Sch Comp & Syst Sci, New Delhi 110067, India
Keywords
Extreme learning machine; Generalized Hessian matrix; Newton method; Single hidden layer feedforward neural networks; Smoothing technique; Support vector regression; SUPPORT VECTOR MACHINE; TIME-SERIES; CLASSIFICATION;
DOI
10.1007/s00521-011-0798-9
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, an extreme learning machine (ELM) for the epsilon-insensitive error loss function-based regression problem, formulated in 2-norm as an unconstrained optimization problem in primal variables, is proposed. Since the objective function of this unconstrained problem is not twice differentiable, the popular generalized Hessian matrix and smoothing approaches are considered, leading to optimization problems whose solutions are determined using a fast Newton-Armijo algorithm. The main advantage of the algorithm is that only a single system of linear equations must be solved at each iteration. In numerical experiments on a number of synthetic and real-world datasets, the results of the proposed method are compared with those of ELM using additive and radial basis function hidden nodes and of support vector regression (SVR) using a Gaussian kernel. The similar or better generalization performance of the proposed method on the test data, obtained in comparable computational time, over ELM and SVR clearly illustrates its efficiency and applicability.
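The optimization scheme outlined in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the tanh additive hidden nodes, the hyperparameter values, and the exact form of the generalized-Hessian indicator are choices made here for concreteness. The key structural point matches the abstract: each Newton-Armijo iteration reduces to solving one linear system in the output weights.

```python
import numpy as np

def elm_eps_newton(X, y, n_hidden=50, C=10.0, eps=0.1, tol=1e-6, max_iter=50, seed=0):
    """Sketch: ELM for eps-insensitive regression in the primal,
    minimizing 0.5*||beta||^2 + 0.5*C*sum(max(|H beta - y| - eps, 0)^2)
    with a generalized-Hessian Newton-Armijo iteration (illustrative only)."""
    rng = np.random.default_rng(seed)
    # Random additive hidden nodes: weights and biases are fixed, not trained.
    W = rng.uniform(-1.0, 1.0, (X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, n_hidden)
    H = np.tanh(X @ W + b)
    beta = np.zeros(n_hidden)
    I = np.eye(n_hidden)

    def obj(bta):
        rr = H @ bta - y
        return 0.5 * bta @ bta + 0.5 * C * np.sum(np.maximum(np.abs(rr) - eps, 0.0) ** 2)

    for _ in range(max_iter):
        r = H @ beta - y
        # Gradient of the squared eps-insensitive loss term.
        s = np.sign(r) * np.maximum(np.abs(r) - eps, 0.0)
        grad = beta + C * (H.T @ s)
        if np.linalg.norm(grad) < tol:
            break
        # Generalized Hessian: identity plus C * H^T diag(1{|r|>eps}) H.
        D = (np.abs(r) > eps).astype(float)
        Hess = I + C * (H.T * D) @ H
        step = np.linalg.solve(Hess, -grad)  # one linear system per iteration
        # Armijo backtracking line search on the convex primal objective.
        t, f0 = 1.0, obj(beta)
        while obj(beta + t * step) > f0 + 1e-4 * t * (grad @ step) and t > 1e-8:
            t *= 0.5
        beta = beta + t * step
    return W, b, beta

# Usage: fit a noisy sinc function and predict with the learned output weights.
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sinc(X).ravel() + 0.05 * np.random.default_rng(1).standard_normal(200)
W, b, beta = elm_eps_newton(X, y)
pred = np.tanh(X @ W + b) @ beta
```

Because the loss is piecewise quadratic and convex, the Hessian above is positive definite (identity plus a positive semidefinite term), so the linear solve is always well posed.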
Pages: 559-567 (9 pages)