An Incremental-Self-Training-Guided Semi-Supervised Broad Learning System

Cited by: 9
Authors
Guo, Jifeng [1 ,2 ]
Liu, Zhulin [1 ,2 ]
Chen, C. L. Philip [1 ,2 ,3 ,4 ]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510641, Peoples R China
[2] Pazhou Lab, Brain & Affect Cognit Res Ctr, Guangzhou 510335, Peoples R China
[3] Minist Educ Hlth Intelligent Percept & Parallel Di, Engn Res Ctr, Guangzhou 510006, Peoples R China
[4] Guangdong Prov Key Lab Computat Intelligence & Cyb, Guangzhou 510006, Peoples R China
Keywords
Broad learning system (BLS); clustering; incremental-self-training (IST); semi-supervised learning; CLASSIFICATION;
DOI
10.1109/TNNLS.2024.3392583
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
The broad learning system (BLS) has recently been applied in numerous fields. However, it is mainly a supervised learning system and is therefore not suitable for practical applications that involve a mixture of labeled and unlabeled data. Although a manifold-regularization-based semi-supervised BLS exists, its performance still requires improvement because its underlying assumption does not always hold. Therefore, this article proposes an incremental-self-training-guided semi-supervised BLS (ISTSS-BLS). Unlike traditional self-training, where all unlabeled data are pseudo-labeled simultaneously, incremental self-training (IST) selects unlabeled data incrementally from a list sorted by the distance between each sample and its cluster center. During iterative learning, a small portion of labeled data is first used to train the BLS. The system then recursively self-updates its structure and meta-parameters using: 1) a double-restricted mechanism and 2) a dynamic neuron-incremental mechanism. The double-restricted mechanism helps prevent the introduction of incorrectly pseudo-labeled samples, while the dynamic neuron-incremental mechanism guides the self-updating of the network structure based on the training accuracy on the labeled data. Together, these strategies keep the model parsimonious during updating. In addition, a novel metric, the accuracy-time ratio (A/T), is proposed to evaluate model performance jointly in terms of time and accuracy. In experimental verification, ISTSS-BLS performs outstandingly on 11 datasets. Specifically, IST is compared with traditional self-training on datasets of three different scales, saving up to 52.02% of the learning time. ISTSS-BLS is also compared with several state-of-the-art alternatives, and all results indicate that it possesses significant performance advantages.
Pages: 7196-7210 (15 pages)