A Construction of Robust Representations for Small Data Sets Using Broad Learning System

Cited by: 18
Authors
Tang, Huimin [1 ,2 ]
Dong, Peiwu [1 ]
Shi, Yong [2 ,3 ,4 ,5 ]
Affiliations
[1] Beijing Inst Technol, Sch Management & Econ, Beijing 100081, Peoples R China
[2] Univ Nebraska, Coll Informat Sci & Technol, Omaha, NE 68182 USA
[3] Chinese Acad Sci, Res Ctr Fictitious Econ & Data Sci, Beijing 100190, Peoples R China
[4] Chinese Acad Sci, Key Lab Big Data Min & Knowledge Management, Beijing 100190, Peoples R China
[5] Southwest Minzu Univ, Coll Elect & Informat Engn, Chengdu 610041, Peoples R China
Source
IEEE TRANSACTIONS ON SYSTEMS MAN CYBERNETICS-SYSTEMS | 2021, Vol. 51, No. 10
Funding
National Natural Science Foundation of China
Keywords
Feature extraction; Machine learning; Zinc; Neural networks; Learning systems; Task analysis; Data models; Broad learning system (BLS); feature extraction; label-based autoencoder (LA); random LA (RLA); robust representation; FEATURE-SELECTION; MACHINE; IDENTIFICATION; APPROXIMATION; AUTOENCODER; PCA;
DOI
10.1109/TSMC.2019.2957818
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Feature processing is an important step in modeling and can improve the accuracy of machine learning models. Feature extraction methods can effectively extract features from high-dimensional data sets and improve task accuracy. However, their performance is unstable on low-dimensional data sets. This article extends the broad learning system (BLS) into a framework for constructing robust representations on low-dimensional, small data sets. First, the BLS is changed from a supervised prediction method into an ensemble feature extraction method. Second, feature extraction methods, instead of random mappings, are used to generate mapped features. Third, deep representations, called enhancement features, are learned from the ensemble of mapped features. Fourth, the data used to generate mapped features and enhancement features can be randomly selected. The ensemble of mapped features and enhancement features provides robust representations that enhance the performance of downstream tasks. A label-based autoencoder (LA) is embedded in the BLS framework as an example to demonstrate the framework's effectiveness, and a random LA (RLA) is presented to generate more diverse features. Experimental results show that the BLS framework can construct robust representations and significantly improve the performance of machine learning models.
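The construction the abstract outlines can be illustrated with a short sketch. The NumPy snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the random linear maps stand in for the feature-extraction methods (such as the label-based autoencoder) that the paper substitutes for the standard BLS random mapping, and all shapes and node counts are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def mapped_features(X, n_groups=5, n_nodes=10):
    # Standard BLS mapped-feature groups: random linear maps plus a
    # nonlinearity. The paper replaces this random mapping with learned
    # feature extractors (e.g., the label-based autoencoder); random
    # maps are used here only to keep the sketch self-contained.
    groups = []
    for _ in range(n_groups):
        W = rng.standard_normal((X.shape[1], n_nodes))
        b = rng.standard_normal(n_nodes)
        groups.append(np.tanh(X @ W + b))
    return np.hstack(groups)  # ensemble of mapped features Z = [Z_1 | ... | Z_n]

def enhancement_features(Z, n_nodes=30):
    # Enhancement features: a further nonlinear map of the ensembled
    # mapped features, giving the deeper part of the representation.
    W = rng.standard_normal((Z.shape[1], n_nodes))
    b = rng.standard_normal(n_nodes)
    return np.tanh(Z @ W + b)

X = rng.standard_normal((100, 8))  # a small, low-dimensional data set
Z = mapped_features(X)
H = enhancement_features(Z)
A = np.hstack([Z, H])  # robust representation [Z | H]
print(A.shape)  # (100, 80); A replaces X as input to a downstream model

In the paper's framework, each feature group would additionally be fit on a randomly selected subset of the data, and the concatenated representation A is what gets passed to the downstream classifier or regressor.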
Pages: 6074-6084
Number of pages: 11