Binary/ternary extreme learning machines

Cited by: 28
Authors
van Heeswijk, Mark [1]
Miche, Yoan [1]
Affiliations
[1] Aalto Univ, Sch Sci, Dept Informat & Comp Sci, FI-00076 Aalto, Finland
Keywords
Extreme learning machine; Hidden layer initialization; Intrinsic plasticity; Random projection; Binary features; Ternary features; PLASTICITY; NETWORKS; ELM;
DOI
10.1016/j.neucom.2014.01.072
CLC classification
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, a new hidden layer construction method for Extreme Learning Machines (ELMs) is investigated, aimed at generating a diverse set of weights. The paper proposes two new ELM variants: Binary ELM, with a weight initialization scheme based on {0,1}-weights; and Ternary ELM, with a weight initialization scheme based on {-1,0,1}-weights. The motivation behind this approach is that these features come from very different subspaces, so each neuron extracts more diverse information from the inputs than neurons with the completely random features traditionally used in ELM. Ideally, this should lead to better ELMs. Experiments show that ELMs with ternary weights indeed generally achieve lower test error. Furthermore, the experiments show that Binary and Ternary ELMs are more robust to irrelevant and noisy variables and in fact perform implicit variable selection. Finally, since only the weight generation scheme is adapted, the computational time of the ELM is unaffected; the improved accuracy, added robustness, and implicit variable selection of Binary ELM and Ternary ELM therefore come for free. (C) 2014 Elsevier B.V. All rights reserved.
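The core idea of the abstract can be sketched in a few lines: draw the input weights from {-1,0,1} instead of a continuous distribution, then solve for the output weights by least squares as in any ELM. This is a simplified illustration only, not the authors' full procedure (the paper also structures how the ternary weights are drawn and applies intrinsic plasticity); the function names and toy data below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def ternary_elm_fit(X, y, n_hidden=100):
    """Fit an ELM whose input weights are drawn from {-1, 0, 1}."""
    n_features = X.shape[1]
    # Ternary weight initialization: every entry is -1, 0, or 1,
    # so each hidden neuron looks at a signed subset of the inputs.
    W = rng.choice([-1.0, 0.0, 1.0], size=(n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activations
    # Output weights via least squares, as in standard ELM.
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def ternary_elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression problem: target is the sum of the inputs.
X = rng.standard_normal((200, 5))
y = X.sum(axis=1)
W, b, beta = ternary_elm_fit(X, y, n_hidden=100)
mse = float(np.mean((y - ternary_elm_predict(X, W, b, beta)) ** 2))
```

A Binary ELM variant would differ only in the draw, e.g. `rng.choice([0.0, 1.0], ...)`; because only the weight generation changes, training cost is identical to a standard ELM, which is the "for free" point made in the abstract.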
Pages: 187-197
Page count: 11