Frequency Regularization: Reducing Information Redundancy in Convolutional Neural Networks

Cited by: 0
Authors
Zhao, Chenqiu [1 ]
Dong, Guanfang [1 ]
Zhang, Shupei [1 ]
Tan, Zijie [1 ]
Basu, Anup [1 ]
Affiliations
[1] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2R3, Canada
Keywords
Tensors; Frequency-domain analysis; Training; Convolutional neural networks; Information processing; Transforms; Task analysis; Frequency domain; information redundancy; network regularization; convolutional neural network
DOI
10.1109/ACCESS.2023.3320642
CLC classification number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Convolutional neural networks have demonstrated impressive results in many computer vision tasks. However, the increasing size of these networks raises concerns about the information overload resulting from the large number of network parameters. In this paper, we propose Frequency Regularization to restrict the non-zero elements of the network parameters in the frequency domain. The proposed approach operates at the tensor level, and can be applied to almost all network architectures. Specifically, the tensors of parameters are maintained in the frequency domain, where high-frequency components can be eliminated by setting tensor elements to zero in a zigzag order. Then, the inverse discrete cosine transform (IDCT) is used to reconstruct the spatial tensors for matrix operations during network training. Since high-frequency components of images are known to be less critical, a large proportion of these parameters can be set to zero when networks are trained with the proposed frequency regularization. Comprehensive evaluations on various state-of-the-art network architectures, including LeNet, AlexNet, VGG, ResNet, ViT, UNet, GAN, and VAE, demonstrate the effectiveness of the proposed frequency regularization. For a very small accuracy decrease (less than 2%), a LeNet5 with 0.4M parameters can be represented by only 776 float16 numbers (over 1100x reduction), and a UNet with 34M parameters can be represented by only 759 float16 numbers (over 80000x reduction). In particular, the original size of the UNet model is reduced from 366 MB to 4.5 KB.
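The core mechanism described in the abstract — keep a parameter tensor in the DCT domain, zero its high-frequency coefficients in zigzag order, then apply the IDCT to recover the spatial tensor used in training — can be sketched as follows. This is a minimal NumPy/SciPy illustration under stated assumptions, not the authors' implementation: the 8x8 weight slice, the budget of 10 retained coefficients, and the anti-diagonal ordering as a zigzag proxy are all illustrative choices.

```python
import numpy as np
from scipy.fft import dctn, idctn


def zigzag_mask(shape, keep):
    """Boolean mask keeping the `keep` lowest-frequency coefficients
    of a 2-D DCT tensor, visited in anti-diagonal (zigzag-like) order."""
    h, w = shape
    # Sort index pairs by anti-diagonal i+j: low-frequency corner first.
    order = sorted(((i, j) for i in range(h) for j in range(w)),
                   key=lambda p: (p[0] + p[1], p[0]))
    mask = np.zeros(shape, dtype=bool)
    for i, j in order[:keep]:
        mask[i, j] = True
    return mask


# Hypothetical 8x8 slice of a convolution weight tensor.
rng = np.random.default_rng(0)
w_spatial = rng.standard_normal((8, 8))

# Move the parameters to the frequency domain.
w_freq = dctn(w_spatial, norm='ortho')

# Frequency regularization: keep only 10 of the 64 coefficients.
mask = zigzag_mask(w_freq.shape, keep=10)
w_freq_reg = np.where(mask, w_freq, 0.0)

# IDCT reconstructs the spatial tensor used in the forward pass;
# only the 10 retained numbers need to be stored.
w_recon = idctn(w_freq_reg, norm='ortho')
```

In a training loop, only the retained frequency coefficients would be treated as learnable parameters, with the IDCT applied each forward pass — which is what makes the extreme storage reductions quoted above possible.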
Pages: 106793-106802
Page count: 10