Frequency Regularization: Reducing Information Redundancy in Convolutional Neural Networks

Cited by: 0
Authors:
Zhao, Chenqiu [1 ]
Dong, Guanfang [1 ]
Zhang, Shupei [1 ]
Tan, Zijie [1 ]
Basu, Anup [1 ]
Affiliations:
[1] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2R3, Canada
Keywords:
Tensors; Frequency-domain analysis; Training; Convolutional neural networks; Information processing; Transforms; Task analysis; Frequency domain; Information redundancy; Network regularization; Convolutional neural network
DOI: 10.1109/ACCESS.2023.3320642
CLC classification: TP [Automation technology; computer technology]
Subject classification: 0812
Abstract:
Convolutional neural networks have demonstrated impressive results in many computer vision tasks. However, the increasing size of these networks raises concerns about the information overload resulting from their large number of parameters. In this paper, we propose Frequency Regularization to restrict the non-zero elements of network parameters in the frequency domain. The proposed approach operates at the tensor level and can be applied to almost all network architectures. Specifically, the parameter tensors are maintained in the frequency domain, where high-frequency components can be eliminated by setting tensor elements to zero in zigzag order. The inverse discrete cosine transform (IDCT) is then used to reconstruct the spatial tensors for matrix operations during network training. Since the high-frequency components of images are known to be less critical, a large proportion of these parameters can be set to zero when networks are trained with the proposed frequency regularization. Comprehensive evaluations on various state-of-the-art network architectures, including LeNet, AlexNet, VGG, ResNet, ViT, UNet, GAN, and VAE, demonstrate the effectiveness of the proposed frequency regularization. For a very small decrease in accuracy (less than 2%), a LeNet5 with 0.4M parameters can be represented by only 776 float16 numbers (an over 1100× reduction), and a UNet with 34M parameters can be represented by only 759 float16 numbers (an over 80000× reduction). In particular, the original size of the UNet model is reduced from 366 MB to 4.5 KB.
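The core step the abstract describes — keeping only low-frequency coefficients (selected in zigzag order) and reconstructing spatial weights with a 2-D IDCT — can be sketched as follows. This is a minimal NumPy/SciPy illustration under our own assumptions (a 2-D weight tensor and a hypothetical `keep` parameter for the number of retained coefficients), not the authors' implementation:

```python
import numpy as np
from scipy.fft import idctn


def zigzag_indices(h, w):
    # Enumerate (row, col) pairs diagonal by diagonal (JPEG-style zigzag),
    # so low-frequency coefficients (small i + j) come first.
    return sorted(
        ((i, j) for i in range(h) for j in range(w)),
        key=lambda p: (p[0] + p[1], p[0] if (p[0] + p[1]) % 2 else p[1]),
    )


def frequency_regularized_weights(freq_coeffs, keep):
    # Zero all but the first `keep` coefficients in zigzag order, then
    # reconstruct the spatial-domain weight tensor with a 2-D inverse DCT.
    masked = np.zeros_like(freq_coeffs)
    for i, j in zigzag_indices(*freq_coeffs.shape)[:keep]:
        masked[i, j] = freq_coeffs[i, j]
    return idctn(masked, norm="ortho")
```

In this sketch only the `keep` retained frequency coefficients would need to be stored; the full spatial tensor is recomputed on the fly for the forward pass, which matches the paper's reported compression figures (hundreds of float16 numbers standing in for millions of parameters).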
Pages: 106793-106802 (10 pages)
Related papers (50 total):
  • [1] Reducing Overfitting in Deep Convolutional Neural Networks Using Redundancy Regularizer
    Wu, Bingzhe
    Liu, Zhichao
    Yuan, Zhihang
    Sun, Guangyu
    Wu, Charles
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING, PT II, 2017, 10614 : 49 - 55
  • [2] SlimConv: Reducing Channel Redundancy in Convolutional Neural Networks by Features Recombining
    Qiu, Jiaxiong
    Chen, Cai
    Liu, Shuaicheng
    Zhang, Heng-Yu
    Zeng, Bing
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 6434 - 6445
  • [3] Drop an Octave: Reducing Spatial Redundancy in Convolutional Neural Networks with Octave Convolution
    Chen, Yunpeng
    Fan, Haoqi
    Xu, Bing
    Yan, Zhicheng
    Kalantidis, Yannis
    Rohrbach, Marcus
    Yan, Shuicheng
    Feng, Jiashi
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 3434 - 3443
  • [4] Absum: Simple Regularization Method for Reducing Structural Sensitivity of Convolutional Neural Networks
    Kanai, Sekitoshi
    Ida, Yasutoshi
    Fujiwara, Yasuhiro
    Yamada, Masanori
    Adachi, Shuichi
    THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 4394 - 4403
  • [5] Convolutional Neural Networks With Dynamic Regularization
    Wang, Yi
    Bian, Zhen-Peng
    Hou, Junhui
    Chau, Lap-Pui
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2021, 32 (05) : 2299 - 2304
  • [6] Multiscale Conditional Regularization for Convolutional Neural Networks
    Lu, Yao
    Lu, Guangming
    Li, Jinxing
    Xu, Yuanrong
    Zhang, Zheng
    Zhang, David
    IEEE TRANSACTIONS ON CYBERNETICS, 2022, 52 (01) : 444 - 458
  • [7] LMix: regularization strategy for convolutional neural networks
    Yan, Linyu
    Zheng, Kunpeng
    Xia, Jinyao
    Li, Ke
    Ling, Hefei
    SIGNAL IMAGE AND VIDEO PROCESSING, 2023, 17 (04) : 1245 - 1253
  • [8] REGULARIZATION OF CONVOLUTIONAL NEURAL NETWORKS USING SHUFFLENODE
    Chen, Yihao
    Wang, Hanli
    Long, Yu
    2017 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO (ICME), 2017, : 355 - 360
  • [9] On the regularization of convolutional kernel tensors in neural networks
    Guo, Pei-Chang
    Ye, Qiang
    LINEAR & MULTILINEAR ALGEBRA, 2022, 70 (12): : 2318 - 2330