A Computationally Efficient Weight Pruning Algorithm for Artificial Neural Network Classifiers

Cited by: 0
Authors
Sakshi [1 ]
Kumar, Ravi [1 ]
Affiliations
[1] Thapar Univ, Elect & Commun Engn Dept, Patiala 147004, Punjab, India
Keywords
Weight pruning; Artificial neural network; Backpropagation; Complexity penalty; Fisher information; Pattern classification; Multilayer perceptrons
DOI
10.1007/s13369-017-2887-2
CLC Classification Number
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences]
Subject Classification Number
07; 0710; 09
Abstract
A novel technique is proposed to prune the weights of artificial neural networks (ANNs) while training with the backpropagation algorithm. Iterative updating of weights through the gradient descent mechanism does not guarantee convergence within a specified number of epochs. Pruning non-relevant weights not only reduces the computational complexity but also improves the classification performance. The algorithm first defines the relevance of the initialized weights in a statistical sense by introducing a coefficient of dominance for each weight converging on a hidden node, and subsequently employs the concept of a complexity penalty. Based on the complexity penalty of each weight, a decision is taken to either prune or retain that weight. It is shown analytically that a weight with a higher complexity penalty carries a higher degree of Fisher information, which in turn implies a greater ability to capture the variations in the input set for better classification. Simulation experiments performed with five benchmark data sets reveal that ANNs pruned with the proposed technique exhibit faster convergence, lower execution time and a higher success rate in the test phase, and yield a substantial reduction in computational resources. For complex architectures, early convergence was found to be directly correlated with the percentage of weights pruned. The efficacy of the technique has been validated on several benchmark datasets with a wide diversity of attributes.
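The abstract describes a decision rule that retains the weights converging on a hidden node whose complexity penalty (and hence Fisher information) is high and prunes the rest. The exact definitions of the coefficient of dominance and the complexity penalty are not given in the abstract, so the sketch below uses assumed, illustrative forms (each weight's magnitude share among the weights entering the same hidden node as the dominance coefficient, and a penalty proportional to it); the names prune_hidden_weights and penalty_threshold are hypothetical.

import numpy as np

def prune_hidden_weights(W, penalty_threshold=0.05):
    """Prune input-to-hidden weights by a dominance-based complexity penalty.

    W : array of shape (n_inputs, n_hidden); column j holds the weights
        converging on hidden node j.
    Returns a pruned copy of W and a boolean mask of retained weights.

    NOTE: the 'coefficient of dominance' and 'complexity penalty' used here
    are illustrative assumptions, not the paper's exact formulation.
    """
    W = np.asarray(W, dtype=float)
    abs_W = np.abs(W)

    # Assumed coefficient of dominance: share of each weight's magnitude
    # among all weights converging on the same hidden node (per column).
    dominance = abs_W / (abs_W.sum(axis=0, keepdims=True) + 1e-12)

    # Assumed complexity penalty: taken proportional to the dominance
    # coefficient, so weights contributing little to a node receive a low
    # penalty and become pruning candidates.
    penalty = dominance

    # Retain weights whose penalty reaches the threshold; prune the rest.
    mask = penalty >= penalty_threshold
    return W * mask, mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.5, size=(8, 4))   # 8 inputs, 4 hidden nodes
    W_pruned, mask = prune_hidden_weights(W, penalty_threshold=0.08)
    print(f"pruned {mask.size - np.count_nonzero(mask)} of {mask.size} weights")

In this sketch the threshold plays the role of the prune-or-retain decision described in the abstract; in the paper that decision is driven by the analytically derived complexity penalty rather than a fixed cutoff.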
Pages: 6787-6799
Page count: 13