Broad and deep neural network for high-dimensional data representation learning

Cited by: 19
Authors
Feng, Qiying [1 ]
Liu, Zhulin [1 ]
Chen, C. L. Philip [1 ,2 ,3 ]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510641, Peoples R China
[2] Pazhou Lab, Guangzhou 510335, Peoples R China
[3] Univ Macau, Fac Sci & Technol, Zhuhai, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Broad learning system; Broad and deep architecture; Representation learning; High-dimensional data; BOLTZMANN MACHINE; ALGORITHM;
DOI
10.1016/j.ins.2022.03.058
Chinese Library Classification
TP [Automation and Computer Technology];
Discipline Code
0812 ;
Abstract
Limited by its shallow structure, the broad learning system (BLS) has difficulty classifying high-dimensional data. To this end, the cascade of convolutional feature mappings and enhancement mappings broad learning system (CCFEBLS) framework is proposed from the perspective of representation learning in this article. Firstly, convolution kernels are exploited to construct the convolutional feature nodes and enhancement nodes, instead of the sparse auto-encoder or linear combination used in the BLS. Secondly, we design a novel broad and deep architecture that cascades the feature mappings and enhancement mappings as broad and deep representations and connects them directly to the output of the CCFEBLS framework. This architecture exploits all representations thoroughly and improves the representation learning capability. Moreover, to boost the robustness of CCFEBLS, weighted hyper-parameters and group regularization are developed to adjust the broad and deep representations and to require each group output to approximate the label directly, respectively. Experimental results on several synthetic and real-world datasets demonstrate that CCFEBLS models outperform the baselines with better performance, fewer parameters, and shorter training time, consistent with the model design and analysis. (c) 2022 Published by Elsevier Inc.
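The broad-and-deep idea described in the abstract — cascaded feature and enhancement mappings, all of which connect directly to the output layer solved in closed form — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the random tanh mappings stand in for the paper's convolutional mappings, and the toy shapes and ridge-regression solver are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 samples, 20 features, 3 one-hot classes (assumed shapes).
X = rng.standard_normal((100, 20))
Y = np.eye(3)[rng.integers(0, 3, 100)]

def mapping(A, out_dim):
    """Random linear map followed by tanh -- a stand-in for the paper's
    convolutional feature/enhancement mappings."""
    W = rng.standard_normal((A.shape[1], out_dim)) * 0.1
    return np.tanh(A @ W)

# Cascade: each stage feeds the next, and ALL stages connect to the output.
Z1 = mapping(X, 30)    # first feature-mapping group
H1 = mapping(Z1, 30)   # enhancement group on Z1
Z2 = mapping(H1, 30)   # deeper (cascaded) feature group
H2 = mapping(Z2, 30)   # deeper enhancement group

A = np.hstack([Z1, H1, Z2, H2])  # broad-and-deep representation

# Output weights by ridge regression (regularization strength assumed).
lam = 1e-2
W_out = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)

pred = (A @ W_out).argmax(axis=1)
train_acc = (pred == Y.argmax(axis=1)).mean()
print(f"toy training accuracy: {train_acc:.2f}")
```

Because every group's output feeds the final layer, the closed-form solve fits all broad and deep representations jointly — the property the abstract credits for using all representations thoroughly.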
Pages: 127-146
Number of pages: 20