Fault and Noise Tolerance in the Incremental Extreme Learning Machine

Cited by: 16
Authors
Leung, Ho Chun [1]
Leung, Chi Sing [1]
Wong, Eric Wing Ming [1]
Affiliations
[1] City Univ Hong Kong, Dept Elect Engn, Hong Kong, People's Republic of China
Source
IEEE ACCESS | 2019, Vol. 7
Keywords
Single hidden layer network; incremental learning; extreme learning machine; multiplicative noise; open fault; NEURAL-NETWORKS; FEEDFORWARD NETWORKS; ERROR ANALYSIS; DESIGN;
DOI
10.1109/ACCESS.2019.2948059
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The extreme learning machine (ELM) is an efficient way to build single-hidden-layer feedforward networks (SLFNs). However, its fault tolerance is weak. When node noise or node failure is present in a network trained under the ELM concept, performance degrades greatly unless a countermeasure is taken, yet such countermeasures for the ELM or for incremental learning are seldom reported. This paper considers a trained SLFN that suffers from the coexistence of node fault and node noise. We develop two fault-tolerant incremental ELM algorithms for regression: the node-fault-tolerant incremental ELM (NFTI-ELM) and the node-fault-tolerant convex incremental ELM (NFTCI-ELM). The NFTI-ELM determines the output weight of the newly inserted node only, and we prove that it converges in terms of the training-set mean squared error (MSE) of faulty SLFNs. Our numerical results show that the NFTI-ELM is superior to the conventional ELM and incremental ELM algorithms under faulty situations. To further improve performance, we propose the NFTCI-ELM, which not only determines the output weight of the newly inserted node but also updates all previously trained output weights. The NFTCI-ELM likewise converges in terms of the training-set MSE of faulty SLFNs, and it is superior to the NFTI-ELM.
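To make the incremental step concrete, below is a minimal Python sketch in the spirit of the NFTI-ELM: each iteration inserts one random hidden node and fits only that node's output weight against the current residual. The (1 + sigma2) damping factor is an assumed stand-in for the paper's multiplicative-node-noise penalty, and the function name, sigmoid activation, and hyperparameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def add_node(X, residual, rng, sigma2=0.1):
    """Insert one random sigmoid hidden node and fit only its output weight.

    Sketch of a noise-regularized incremental ELM step (assumed form, not
    the exact NFTI-ELM update from the paper).
    """
    a = rng.standard_normal(X.shape[1])        # random input weights
    b = rng.standard_normal()                  # random bias
    h = 1.0 / (1.0 + np.exp(-(X @ a + b)))     # node activations on the training set
    # One-dimensional regularized least squares for the new node's weight:
    # beta = <residual, h> / ((1 + sigma2) * <h, h>). The (1 + sigma2) factor
    # shrinks the weight to curb sensitivity to multiplicative node noise.
    beta = (residual @ h) / ((1.0 + sigma2) * (h @ h))
    return (a, b, beta), residual - beta * h   # node parameters, updated residual

# Usage: grow the network node by node; the training residual is non-increasing.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]
residual, nodes = y.copy(), []
for _ in range(50):
    node, residual = add_node(X, residual, rng)
    nodes.append(node)
print("training MSE after 50 nodes:", float(np.mean(residual**2)))
```

The NFTCI-ELM variant described in the abstract would additionally re-optimize all previously trained output weights at each insertion; that joint update is omitted here for brevity.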
Pages: 155171-155183
Page count: 13
Related Papers
50 records in total
  • [31] Synthetic Aperture Radar Target Identification Based on Incremental Kernel Extreme Learning Machine
    Guo, Chen-long
    Zhou, Hongyi
    TENTH INTERNATIONAL CONFERENCE ON DIGITAL IMAGE PROCESSING (ICDIP 2018), 2018, 10806
  • [32] Hessian unsupervised extreme learning machine
    Dass, Sharana Dharshikgan Suresh
    Krishnasamy, Ganesh
    Paramesran, Raveendran
    Phan, Raphael C.-W.
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15(05): 2013-2022
  • [33] Learning local discriminative representations via extreme learning machine for machine fault diagnosis
    Li, Yue
    Zeng, Yijie
    Qing, Yuanyuan
    Huang, Guang-Bin
    NEUROCOMPUTING, 2020, 409: 275-285
  • [34] Fault Diagnosis of Tennessee-Eastman Process Using Orthogonal Incremental Extreme Learning Machine Based on Driving Amount
    Zou, Weidong
    Xia, Yuanqing
    Li, Huifang
    IEEE TRANSACTIONS ON CYBERNETICS, 2018, 48(12): 3403-3410
  • [35] Integrated Optimization Method of Hidden Parameters in Incremental Extreme Learning Machine
    Zhang, Siyuan
    Xie, Linbo
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021
  • [36] L1-PLS Based on Incremental Extreme Learning Machine
    Sun, Zhiying
    Zhou, Jinglin
    PROCEEDINGS OF 2020 IEEE 9TH DATA DRIVEN CONTROL AND LEARNING SYSTEMS CONFERENCE (DDCLS'20), 2020: 947-952
  • [37] A fast incremental extreme learning machine algorithm for data streams classification
    Xu, Shuliang
    Wang, Junhong
    EXPERT SYSTEMS WITH APPLICATIONS, 2016, 65: 332-344
  • [38] A novel visual tracking system with adaptive incremental extreme learning machine
    Wang, Zhihui
    Yoon, Sook
    Park, Dong Sun
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS, 2017, 11(01): 451-465
  • [39] Adaptive Incremental Ensemble of Extreme Learning Machines for Fault Diagnosis in Induction Motors
    Razavi-Far, Roozbeh
    Saif, Mehrdad
    Palade, Vasile
    Zio, Enrico
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017: 1615-1622
  • [40] Surface reconstruction based on extreme learning machine
    Zhou, Zheng Hua
    Zhao, Jian Wei
    Cao, Fei Long
    NEURAL COMPUTING & APPLICATIONS, 2013, 23(02): 283-292