Effective sparsity control in deep belief networks using normal regularization term

Cited by: 0
Authors
Mohammad Ali Keyvanrad
Mohammad Mehdi Homayounpour
Affiliation
[1] Amirkabir University of Technology, Laboratory for Intelligent Multimedia Processing (LIMP)
Source
Knowledge and Information Systems | 2017, Vol. 53
Keywords
Deep belief network; Restricted Boltzmann machine; Normal sparse RBM; Quadratic sparse RBM; Rate distortion sparse RBM;
DOI
Not available
Abstract
The use of deep network architectures has become widespread in machine learning. Deep belief networks (DBNs) are deep architectures that build a powerful generative model from training data, and they can be used for classification and feature learning. A DBN can be trained unsupervised, after which the learned features are suitable for a simple classifier (such as a linear classifier) trained on only a few labeled samples. Moreover, prior research shows that imposing sparsity on DBNs yields useful low-level feature representations for unlabeled data. Sparse representations have the property that the learned features are interpretable, i.e., they correspond to meaningful aspects of the input and capture factors of variation in the data. Several methods have been proposed to build sparse DBNs. In this paper, we propose a new method whose behavior depends on the deviation of the hidden units' activations from a fixed low value. In addition, our proposed regularization term has a variance parameter that controls the degree of sparsity enforcement. According to the results, the new method achieves the best recognition accuracy on the test sets of several datasets across different applications (image, speech, and text), and it remains strong across different numbers of training samples, especially when only a few samples are available for training.
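To make the idea concrete, the sketch below is a minimal NumPy illustration of a Gaussian-shaped sparsity term inside one contrastive-divergence (CD-1) update of a binary RBM. It is not the paper's exact formulation: the function names (`normal_sparsity_grad`, `cd1_step`), the precise form of the penalty, and the shortcut of adding the sparsity gradient directly to the hidden-bias update are assumptions for illustration. Only the core idea comes from the abstract: the regularization is a normal function of each hidden unit's mean activation, centered at a low target p, whose variance parameter sigma controls how strongly sparsity is enforced.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def normal_sparsity_grad(q, p=0.05, sigma=0.2, lam=1.0):
    """Ascent gradient of a hypothetical Gaussian sparsity reward.

    Assumed form: reward = lam * sum_j exp(-(q_j - p)^2 / (2 * sigma^2)),
    where q_j is the mean activation of hidden unit j over the mini-batch.
    The pull toward the low target p is strongest for units within roughly
    one sigma of p and fades for units already far from it, so sigma sets
    the force of the sparsity constraint.
    """
    gauss = np.exp(-(q - p) ** 2 / (2.0 * sigma ** 2))
    return -lam * (q - p) / sigma ** 2 * gauss

def cd1_step(V, W, b, c, lr=0.05, p=0.05, sigma=0.2, lam=1.0):
    """One CD-1 ascent step for a binary RBM, plus the sparsity term."""
    # Positive phase: hidden probabilities and a sampled hidden state.
    ph = sigmoid(V @ W + c)
    h = (rng.random(ph.shape) < ph).astype(float)
    # Negative phase: one Gibbs step back to visible, then hidden again.
    pv = sigmoid(h @ W.T + b)
    ph2 = sigmoid(pv @ W + c)
    n = V.shape[0]
    W += lr * (V.T @ ph - pv.T @ ph2) / n
    b += lr * (V - pv).mean(axis=0)
    # Approximation (common in sparse-RBM code): add the sparsity gradient
    # w.r.t. the mean activation straight onto the hidden-bias update.
    c += lr * ((ph - ph2).mean(axis=0)
               + normal_sparsity_grad(ph.mean(axis=0), p, sigma, lam))
    return W, b, c

# Toy usage: 100 binary samples, 20 visible units, 10 hidden units.
V = (rng.random((100, 20)) < 0.3).astype(float)
W = 0.01 * rng.standard_normal((20, 10))
b, c = np.zeros(20), np.zeros(10)
for _ in range(50):
    W, b, c = cd1_step(V, b=b, c=c, W=W)
print("mean hidden activation:", sigmoid(V @ W + c).mean())
```

Note the shape of the force: because the gradient of a Gaussian vanishes both at its center and in its tails, units whose mean activation is already near p receive little push, and units far from p (plausibly encoding genuinely active features) are also left largely alone; shrinking sigma narrows and intensifies the region where the penalty acts.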
Pages: 533-550 (17 pages)