Broad learning system with manifold regularized sparse features for semi-supervised classification

Cited by: 18
Authors
Huang, Shiluo [1 ]
Liu, Zheng [1 ]
Jin, Wei [1 ,2 ]
Mu, Ying [1 ]
Affiliations
[1] Zhejiang Univ, State Key Lab Ind Control Technol, Inst Cyber Syst & Control, Res Ctr Analyt Instrumentat, Hangzhou 310058, Peoples R China
[2] Zhejiang Univ, Huzhou Inst, Huzhou 313000, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Broad learning system; Semi-supervised learning; Manifold regularization; REPRESENTATIONS; REGRESSION; MACHINE; MODEL;
DOI
10.1016/j.neucom.2021.08.052
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The broad learning system (BLS) is an efficient neural network that has proven effective in fields such as remote sensing and fault diagnosis. As an important branch of BLS, semi-supervised BLS has drawn increasing attention. Exploiting the information carried by additional unlabeled instances is the key to semi-supervised learning, and studies have shown that incorporating this information into the feature nodes is an effective way to implement semi-supervised BLS. However, existing methods cannot preserve the sparsity of the feature nodes, and they become computationally expensive on large-scale datasets. To address these problems, a broad learning system with manifold regularized sparse features (BLS-MS) is proposed. We first propose a manifold regularized sparse autoencoder based on the extreme learning machine (MS-ELM-AE) for feature mapping. A subset training approach is then introduced to alleviate the efficiency decline caused by large data sizes. Finally, the proposed BLS-MS is further modified to exploit the discriminant information of labeled data, namely the discriminative BLS-MS (DBLSMS). The proposed methods have been evaluated on 14 datasets, and the experimental results demonstrate both their effectiveness and their efficiency. (c) 2021 Elsevier B.V. All rights reserved.
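The core idea behind manifold regularization, as used in the abstract's MS-ELM-AE, is to penalize feature representations that differ between neighbouring samples. A minimal NumPy sketch of that penalty is shown below; the function names and the choice of a binary k-nearest-neighbour graph are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def knn_adjacency(X, k=3):
    """Symmetric binary k-nearest-neighbour adjacency matrix over rows of X.
    (Illustrative choice; the paper may use a weighted graph instead.)"""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # a sample is not its own neighbour
    W = np.zeros_like(d)
    idx = np.argsort(d, axis=1)[:, :k]   # indices of the k closest samples
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = 1.0
    return np.maximum(W, W.T)            # symmetrize the graph

def manifold_penalty(H, W):
    """tr(H^T L H) with graph Laplacian L = D - W.
    Small when connected samples receive similar feature rows of H."""
    L = np.diag(W.sum(axis=1)) - W
    return np.trace(H.T @ L @ H)

# Toy usage: penalize a random feature mapping of unlabeled data.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 4))             # 10 samples, 4 raw features
H = np.tanh(X @ rng.normal(size=(4, 6))) # hypothetical feature nodes
penalty = manifold_penalty(H, knn_adjacency(X, k=3))
```

In the paper's setting, a term like this (weighted by a regularization coefficient) would be added to a sparse autoencoder objective so that unlabeled samples shape the learned feature nodes; since the Laplacian is positive semidefinite, the penalty is always nonnegative and vanishes for features that are constant across the graph.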
Pages: 133-143
Page count: 11