Lightweight Model by Iterative Momentum Pruning Scheme for Channel Reduction

Cited by: 0
Authors
Chen, Oscal Tzyh-Chiang [1 ]
Lu, Yen-Cheng [1 ]
Chang, Yu-Xuan [1 ]
Affiliations
[1] Natl Chung Cheng Univ, Dept Elect Engn, Chiayi 62102, Taiwan
Source
2022 IEEE INTL CONF ON DEPENDABLE, AUTONOMIC AND SECURE COMPUTING, INTL CONF ON PERVASIVE INTELLIGENCE AND COMPUTING, INTL CONF ON CLOUD AND BIG DATA COMPUTING, INTL CONF ON CYBER SCIENCE AND TECHNOLOGY CONGRESS (DASC/PICOM/CBDCOM/CYBERSCITECH) | 2022年
Keywords
Neural network compression; channel pruning; momentum pruning; lightweight model;
DOI
10.1109/DASC/PiCom/CBDCom/Cy55231.2022.9927760
CLC number
TP [Automation and computer technology];
Subject classification
0812 ;
Abstract
How to effectively trim an original model into a lightweight one is an important issue for edge computing. This work develops an Iterative Momentum Pruning (IMP) scheme that derives an adaptive threshold from the scaling factors of the batch-normalized weights of the sparse model and the channel ratio of each layer, and trims the model over multiple iterations. The threshold equation includes a momentum term and a channel-portion term, which come, respectively, from the standard deviations of the scaling factors of the batch-normalization layers and from the channel number of each layer relative to the total channel number of the model, accompanied by a scaling factor. Simulation results of VGG models on CIFAR-10 and CIFAR-100 reveal that the proposed IMP scheme is superior to conventional pruning schemes at low compression ratios in terms of accuracy and parameter quantity. At small pruning ratios, the momentum term in our IMP scheme plays the major role in truncating the model. This phenomenon illustrates that channel trimming based on the momenta of the scaling factors of batch-normalization layers is effective. In particular, the proposed IMP scheme can work with a neural architecture search approach for channel search and reduction to further achieve excellent performance and low complexity, according to the simulation results of YOLOv4 on BDD100K.
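The abstract does not give the threshold equation explicitly, so the following is only a minimal sketch of the general idea it describes, under assumed forms: the momentum term is taken proportional to the standard deviation of a layer's batch-normalization scaling factors (gammas), the channel-portion term proportional to that layer's share of the model's total channels, and `alpha`/`beta` are hypothetical scaling coefficients not named in the paper.

```python
import numpy as np

def imp_thresholds(bn_gammas, alpha=0.5, beta=0.1):
    """Per-layer adaptive pruning thresholds (illustrative sketch only).

    bn_gammas: list of 1-D arrays, one per layer, holding that layer's
    batch-normalization scaling factors. The combination below is an
    assumption, not the paper's exact equation.
    """
    total_channels = sum(g.size for g in bn_gammas)
    thresholds = []
    for g in bn_gammas:
        momentum_term = alpha * float(np.abs(g).std())        # std of BN scaling factors
        portion_term = beta * g.size / total_channels          # layer channels vs. model total
        thresholds.append(momentum_term + portion_term)
    return thresholds

def prune_masks(bn_gammas, thresholds):
    """Keep only channels whose |gamma| exceeds the layer threshold."""
    return [np.abs(g) > t for g, t in zip(bn_gammas, thresholds)]
```

In an iterative scheme, these masks would be applied to drop channels, the slimmed model retrained with a sparsity penalty, and the thresholding repeated for multiple rounds.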
Pages: 52-55
Page count: 4