When Broad Learning System Meets Label Noise Learning: A Reweighting Learning Framework

Cited by: 8
Authors
Liu, Licheng [1 ]
Chen, Junhao [1 ]
Yang, Bin
Feng, Qiying [2 ]
Chen, C. L. Philip [2 ]
Affiliations
[1] Hunan University, College of Electrical and Information Engineering, Changsha 410082, People's Republic of China
[2] South China University of Technology, School of Computer Science and Engineering, Guangzhou 510641, People's Republic of China
Funding
National Natural Science Foundation of China;
Keywords
Noise measurement; Training; Learning systems; Data models; Adaptation models; Adaptive systems; Time series analysis; Adaptive weight calculation; broad learning system (BLS); elementwise reweighting; label noise learning; noisy data classification; RECOGNITION; REGRESSION; MACHINE;
DOI
10.1109/TNNLS.2023.3317255
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The broad learning system (BLS) is a neural network with efficient learning and expansion capabilities, but it is sensitive to noise. Existing robust broad models therefore attempt to suppress noise by assigning each sample an appropriate scalar weight that reduces the contribution of noisy samples to network training. However, they disregard the useful information carried by the uncorrupted elements hidden within noisy samples, which leads to unsatisfactory performance. To this end, a novel BLS with an adaptive reweighting (BLS-AR) strategy is proposed in this article for the classification of data with label noise. Unlike previous works, BLS-AR learns a weight vector, rather than a scalar weight, for each sample to indicate the noise degree of each element in that sample, extending the reweighting strategy from the sample level to the element level. This enables the proposed network to precisely identify noisy elements and thus emphasize the contribution of informative ones, yielding a more accurate representation model. Thanks to the separability of the model, the proposed network can be divided into several subnetworks, each of which can be trained efficiently. In addition, three corresponding incremental learning algorithms are developed for the BLS-AR to accommodate new samples or to expand the network. Substantial experiments demonstrate the effectiveness and robustness of the proposed BLS-AR model.
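To make the element-level reweighting idea concrete, the following is a minimal Python sketch, not the paper's BLS-AR algorithm: it combines a basic BLS-style feature expansion (random mapped-feature and enhancement nodes) with an alternating procedure that solves a weighted ridge regression for the output weights and then recomputes a per-element weight matrix from the label residuals. The function names (bls_features, fit_elementwise_reweighted), the Gaussian weighting rule, and all hyperparameters are illustrative assumptions.

import numpy as np

def bls_features(X, n_maps=10, n_enhance=50, seed=0):
    # Random mapped-feature and enhancement nodes, as in a basic BLS (illustrative).
    rng = np.random.default_rng(seed)
    Wf = rng.standard_normal((X.shape[1], n_maps))
    Z = X @ Wf                                    # mapped feature nodes (linear map here)
    We = rng.standard_normal((n_maps, n_enhance))
    H = np.tanh(Z @ We)                           # enhancement nodes
    return np.hstack([Z, H])                      # expanded feature matrix A

def fit_elementwise_reweighted(A, Y, lam=1e-3, n_iter=5, sigma=1.0):
    # Alternate between (i) a weighted ridge solution for the output weights W and
    # (ii) per-element weights that down-weight entries of Y with large residuals.
    n, c = Y.shape
    V = np.ones((n, c))                           # one weight per label entry, not per sample
    W = np.zeros((A.shape[1], c))
    for _ in range(n_iter):
        for k in range(c):                        # each output column has its own diagonal weights
            D = V[:, k]
            AtDA = A.T @ (D[:, None] * A)
            W[:, k] = np.linalg.solve(AtDA + lam * np.eye(A.shape[1]),
                                      A.T @ (D * Y[:, k]))
        R = Y - A @ W                             # elementwise residuals
        V = np.exp(-(R ** 2) / (2 * sigma ** 2))  # assumed Gaussian rule: small residual -> weight near 1
    return W

# Usage with noisy one-hot labels: only the corrupted label entries are suppressed.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 20))
Y = np.eye(4)[rng.integers(0, 4, 200)].astype(float)
A = bls_features(X)
W = fit_elementwise_reweighted(A, Y)
pred = (A @ W).argmax(axis=1)

Because each output dimension carries its own diagonal weights, the output weights are solved column by column; this loosely mirrors the separability into independently trainable subnetworks mentioned in the abstract.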
Pages: 18512-18524
Number of pages: 13
Related Papers (56 in total)
  • [1] Asuncion A., 2007, UCI Machine Learning Repository.
  • [2] Ben-Israel A., 2003, Generalized Inverses: Theory and Applications.
  • [3] Bengio Y., 2009, Proc. Int. Conf. Mach. Learn. (ICML).
  • [4] Chen C. L. P., Liu Z., Feng S. Universal Approximation Capability of Broad Learning System and Its Structural Variations. IEEE Transactions on Neural Networks and Learning Systems, 2019, 30(4): 1191-1204.
  • [5] Chen C. L. P., Liu Z. Broad Learning System: An Effective and Efficient Incremental Learning System Without the Need for Deep Architecture. IEEE Transactions on Neural Networks and Learning Systems, 2018, 29(1): 10-24.
  • [6] Chen C. L. P., Liu L., Chen L., Tang Y. Y., Zhou Y. Weighted Couple Sparse Representation With Classified Regularization for Impulse Noise Removal. IEEE Transactions on Image Processing, 2015, 24(11): 4014-4026.
  • [7] Chu F., Liang T., Chen C. L. P., Wang X., Ma X. Weighted Broad Learning System and Its Application in Nonlinear Industrial Process Modeling. IEEE Transactions on Neural Networks and Learning Systems, 2020, 31(8): 3017-3031.
  • [8] Du J., Zhou Y., Liu P., Vong C.-M., Wang T. Parameter-Free Loss for Class-Imbalanced Deep Learning in Image Classification. IEEE Transactions on Neural Networks and Learning Systems, 2023, 34(6): 3234-3240.
  • [9] Du J., Vong C.-M., Chen C. L. P. Novel Efficient RNN and LSTM-Like Architectures: Recurrent and Gated Broad Learning Systems and Their Applications for Text Classification. IEEE Transactions on Cybernetics, 2021, 51(3): 1586-1597.
  • [10] Feng Q., Liu Z., Chen C. L. P. Broad and Deep Neural Network for High-Dimensional Data Representation Learning. Information Sciences, 2022, 599: 127-146.