An Incremental-Self-Training-Guided Semi-Supervised Broad Learning System

Cited by: 9
Authors
Guo, Jifeng [1 ,2 ]
Liu, Zhulin [1 ,2 ]
Chen, C. L. Philip [1 ,2 ,3 ,4 ]
Affiliations
[1] South China Univ Technol, Sch Comp Sci & Engn, Guangzhou 510641, Peoples R China
[2] Pazhou Lab, Brain & Affect Cognit Res Ctr, Guangzhou 510335, Peoples R China
[3] Minist Educ Hlth Intelligent Percept & Parallel Di, Engn Res Ctr, Guangzhou 510006, Peoples R China
[4] Guangdong Prov Key Lab Computat Intelligence & Cyb, Guangzhou 510006, Peoples R China
Keywords
Broad learning system (BLS); clustering; incremental-self-training (IST); semi-supervised learning; CLASSIFICATION;
DOI
10.1109/TNNLS.2024.3392583
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
The broad learning system (BLS) has recently been applied in numerous fields. However, it is mainly a supervised learning system and is therefore not suitable for practical applications that involve a mixture of labeled and unlabeled data. Although a manifold-regularization-based semi-supervised BLS exists, its performance still requires improvement, because its underlying assumption does not always hold. Therefore, this article proposes an incremental-self-training-guided semi-supervised BLS (ISTSS-BLS). Unlike traditional self-training, in which all unlabeled data are pseudo-labeled simultaneously, incremental self-training (IST) draws unlabeled data incrementally from a sorted list built from the distance between each sample and its cluster center. During iterative learning, a small portion of labeled data is first used to train the BLS. The system then recursively updates its structure and meta-parameters using: 1) a double-restricted mechanism and 2) a dynamic neuron-incremental mechanism. The double-restricted mechanism helps prevent the introduction of incorrectly pseudo-labeled samples, while the dynamic neuron-incremental mechanism guides the self-updating of the network structure based on the training accuracy on the labeled data. These strategies keep the model parsimonious during updates. In addition, a novel metric, the accuracy-time ratio (A/T), is proposed to evaluate model performance jointly in terms of accuracy and time. In experiments, ISTSS-BLS performs outstandingly on 11 datasets. Specifically, IST is compared with traditional self-training on datasets of three scales, saving up to 52.02% of the learning time. ISTSS-BLS is also compared with several state-of-the-art alternatives, and all results indicate that it holds significant performance advantages.
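The incremental pseudo-labeling loop described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' BLS implementation: a logistic-regression classifier stands in for the BLS base learner, k-means supplies the cluster centers for the sorted list, and a single confidence threshold (`conf_thresh`) approximates the paper's double-restricted mechanism. All function and parameter names here are hypothetical.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def incremental_self_training(X_l, y_l, X_u, n_clusters=2, batch=10,
                              conf_thresh=0.8, max_rounds=20):
    """Pseudo-label unlabeled samples incrementally, closest-to-centroid first.

    Stand-in sketch: a real ISTSS-BLS would retrain a BLS (not logistic
    regression) and apply the paper's double-restricted acceptance test.
    """
    # Cluster the unlabeled pool and sort it by distance to its own center;
    # points near a cluster center are taken first as the most reliable.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_u)
    dist = np.linalg.norm(X_u - km.cluster_centers_[km.labels_], axis=1)
    order = np.argsort(dist)

    clf = LogisticRegression(max_iter=1000).fit(X_l, y_l)
    used = np.zeros(len(X_u), dtype=bool)

    for _ in range(max_rounds):
        # Next small batch from the sorted list (incremental, not all at once).
        cand = [i for i in order if not used[i]][:batch]
        if not cand:
            break
        proba = clf.predict_proba(X_u[cand])
        # Confidence restriction: only accept high-confidence pseudo-labels.
        keep = proba.max(axis=1) >= conf_thresh
        if not keep.any():
            break
        idx = np.array(cand)[keep]
        pseudo = clf.classes_[proba[keep].argmax(axis=1)]
        X_l = np.vstack([X_l, X_u[idx]])
        y_l = np.concatenate([y_l, pseudo])
        used[idx] = True
        # Retrain on the enlarged labeled set before the next increment.
        clf = LogisticRegression(max_iter=1000).fit(X_l, y_l)
    return clf
```

The key difference from classic self-training is the `order`/`batch` machinery: instead of pseudo-labeling the whole pool in one pass, each round consumes only the next few samples from the centroid-sorted list, so early (most trustworthy) pseudo-labels shape the model before riskier ones are considered.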
Pages: 7196-7210
Number of pages: 15
Related References
68 references in total
[31]  
Lee Dong-Hyun., 2013, WORKSHOP CHALLENGES, V3
[32]   An effective framework based on local cores for self-labeled semi-supervised classification [J].
Li, Junnan ;
Zhu, Qingsheng ;
Wu, Quanwang ;
Cheng, Dongdong .
KNOWLEDGE-BASED SYSTEMS, 2020, 197
[33]   Face Sketch Synthesis Using Regularized Broad Learning System [J].
Li, Ping ;
Sheng, Bin ;
Chen, C. L. Philip .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33 (10) :5346-5360
[34]   A self-training semi-supervised SVM algorithm and its application in an EEG-based brain computer interface speller system [J].
Li, Yuanqing ;
Guan, Cuntai ;
Li, Huiqi ;
Chin, Zhengyang .
PATTERN RECOGNITION LETTERS, 2008, 29 (09) :1285-1294
[35]   Modal-Regression-Based Broad Learning System for Robust Regression and Classification [J].
Liu, Licheng ;
Liu, Tingyun ;
Chen, C. L. Philip ;
Wang, Yaonan .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) :12344-12357
[36]  
Liu SR, 2016, IEEE ICCSS 2016 - 2016 3RD INTERNATIONAL CONFERENCE ON INFORMATIVE AND CYBERNETICS FOR COMPUTATIONAL SOCIAL SYSTEMS (ICCSS), P81, DOI 10.1109/ICCSS.2016.7586428
[37]   An Online Active Broad Learning Approach for Real-Time Safety Assessment of Dynamic Systems in Nonstationary Environments [J].
Liu, Zeyi ;
Zhang, Yi ;
Ding, Zhongjun ;
He, Xiao .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (10) :6714-6724
[38]   Stacked Broad Learning System: From Incremental Flatted Structure to Deep Model [J].
Liu, Zhulin ;
Chen, C. L. Philip ;
Feng, Shuang ;
Feng, Qiying ;
Zhang, Tong .
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS, 2021, 51 (01) :209-222
[39]  
Liu ZL, 2019, IEEE SYS MAN CYBERN, P193, DOI [10.1109/SMC.2019.8914328, 10.1109/smc.2019.8914328]
[40]   Fine-Grained Visual-Text Prompt-Driven Self-Training for Open-Vocabulary Object Detection [J].
Long, Yanxin ;
Han, Jianhua ;
Huang, Runhui ;
Xu, Hang ;
Zhu, Yi ;
Xu, Chunjing ;
Liang, Xiaodan .
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (11) :16277-16287