Frequency Regularization: Reducing Information Redundancy in Convolutional Neural Networks

Citations: 0
Authors
Zhao, Chenqiu [1 ]
Dong, Guanfang [1 ]
Zhang, Shupei [1 ]
Tan, Zijie [1 ]
Basu, Anup [1 ]
Affiliations
[1] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2R3, Canada
Keywords
Tensors; Frequency-domain analysis; Training; Convolutional neural networks; Information processing; Transforms; Task analysis; Frequency domain; Information redundancy; Network regularization; Convolutional neural network
DOI
10.1109/ACCESS.2023.3320642
CLC Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Convolutional neural networks have demonstrated impressive results in many computer vision tasks. However, the increasing size of these networks raises concerns about the information overload resulting from the large number of network parameters. In this paper, we propose Frequency Regularization to restrict the non-zero elements of the network parameters in the frequency domain. The proposed approach operates at the tensor level and can be applied to almost all network architectures. Specifically, the tensors of parameters are maintained in the frequency domain, where high-frequency components can be eliminated by setting tensor elements to zero in a zigzag order. The inverse discrete cosine transform (IDCT) is then used to reconstruct the spatial tensors for matrix operations during network training. Since the high-frequency components of images are known to be less critical, a large proportion of these parameters can be set to zero when networks are trained with the proposed frequency regularization. Comprehensive evaluations on various state-of-the-art network architectures, including LeNet, AlexNet, VGG, ResNet, ViT, UNet, GAN, and VAE, demonstrate the effectiveness of the proposed frequency regularization. For a very small accuracy decrease (less than 2%), a LeNet5 with 0.4M parameters can be represented by only 776 float16 numbers (over a 1100x reduction), and a UNet with 34M parameters can be represented by only 759 float16 numbers (over an 80000x reduction). In particular, the original size of the UNet model is reduced from 366 MB to 4.5 KB.
Pages: 106793-106802 (10 pages)
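
The abstract describes the mechanism only in words: parameters are kept as DCT coefficients, high-frequency coefficients are zeroed in zigzag order, and the inverse DCT reconstructs the spatial tensor for the forward pass. The sketch below illustrates that idea for a single linear layer, assuming PyTorch; the names FreqRegLinear, dct_matrix, zigzag_mask, and the keep parameter are hypothetical illustrations of the technique, not the authors' released code.

import math
import torch
import torch.nn as nn

def dct_matrix(n: int) -> torch.Tensor:
    # Orthonormal DCT-II matrix D, so the 2-D DCT of X is D_h @ X @ D_w.T
    # and the inverse transform is D_h.T @ C @ D_w.
    idx = torch.arange(n, dtype=torch.float32)
    basis = torch.cos(math.pi * (idx[None, :] + 0.5) * idx[:, None] / n)
    basis[0] *= 1.0 / math.sqrt(2.0)
    return basis * math.sqrt(2.0 / n)

def zigzag_mask(h: int, w: int, keep: int) -> torch.Tensor:
    # Keep the `keep` lowest-frequency positions, visiting coefficients
    # along anti-diagonals (JPEG-style zigzag); all others stay zero.
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda ij: (ij[0] + ij[1], ij[0]))
    mask = torch.zeros(h, w, dtype=torch.float32)
    for i, j in order[:keep]:
        mask[i, j] = 1.0
    return mask

class FreqRegLinear(nn.Module):
    # A linear layer whose weight is parameterized in the DCT domain.
    # Only the `keep` low-frequency coefficients receive nonzero
    # gradients; the masked high-frequency ones never leave zero.
    def __init__(self, in_features: int, out_features: int, keep: int):
        super().__init__()
        self.coef = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.register_buffer("mask", zigzag_mask(out_features, in_features, keep))
        self.register_buffer("D_out", dct_matrix(out_features))
        self.register_buffer("D_in", dct_matrix(in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        c = self.coef * self.mask                  # zigzag truncation
        weight = self.D_out.t() @ c @ self.D_in    # 2-D inverse DCT
        return x @ weight.t() + self.bias

# Example: 256 * 784 = 200,704 spatial weights represented by
# only 500 trainable frequency coefficients.
layer = FreqRegLinear(784, 256, keep=500)
y = layer(torch.randn(8, 784))

In this reading, the extreme compression ratios quoted in the abstract come from storing only the kept low-frequency coefficients (in float16) rather than the full spatial tensors, which the IDCT regenerates on the fly during training and inference.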