Compressing neural networks with two-layer decoupling

Cited by: 0
Authors
De Jonghe, Joppe [1 ]
Usevich, Konstantin [2 ]
Dreesen, Philippe [3 ]
Ishteva, Mariya [1 ]
Affiliations
[1] Katholieke Univ Leuven, Dept Comp Sci, Geel, Belgium
[2] Univ Lorraine, CNRS, Nancy, France
[3] Maastricht Univ, DACS, Maastricht, Netherlands
Source
2023 IEEE 9TH INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING, CAMSAP | 2023
Keywords
tensor; tensor decomposition; decoupling; compression; neural network; model compression; acceleration
DOI
10.1109/CAMSAP58249.2023.10403509
CLC number
TP39 [Applications of Computers];
Discipline codes
081203; 0835;
Abstract
The single-layer decoupling problem has recently been used to compress neural networks. However, methods based on the single-layer decoupling problem can only compress a network into a single flexible layer, so compressing more complex networks yields poorer approximations of the original network. Compressing into more than one flexible layer allows the underlying network to be approximated more accurately, and doing so corresponds to solving a multi-layer decoupling problem. As a first step towards general multi-layer decoupling, this work introduces a method for solving the two-layer decoupling problem in the approximate case, which enables the compression of neural networks into two flexible layers.
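
To make the decoupled structure concrete, the following is a minimal NumPy sketch, assuming the standard decoupled form f(x) ≈ W g(V^T x), in which g applies independent univariate ("flexible") nonlinearities elementwise; composing two such layers gives the two-layer structure considered here. All function names, dimensions, and the particular nonlinearities are illustrative assumptions and are not taken from the paper.

    import numpy as np

    # Sketch of a "flexible" (decoupled) layer, assuming the decoupled form
    # f(x) ~= W g(V^T x), where g applies independent univariate nonlinearities
    # elementwise. Names, sizes, and nonlinearities below are illustrative only.

    def flexible_layer(x, V, W, g_list):
        """Linear mix V^T x, elementwise univariate functions g_i, linear mix W."""
        z = V.T @ x                                        # inner linear transform
        h = np.array([g(zi) for g, zi in zip(g_list, z)])  # flexible nonlinearities
        return W @ h                                       # outer linear transform

    def two_layer_flexible(x, params):
        """Two flexible layers composed: the structure targeted by two-layer decoupling."""
        (V1, W1, g1), (V2, W2, g2) = params
        return flexible_layer(flexible_layer(x, V1, W1, g1), V2, W2, g2)

    # Toy usage with hypothetical nonlinearities (quadratic, then tanh).
    rng = np.random.default_rng(0)
    n, r1, m1, r2, m2 = 4, 3, 4, 3, 2                      # input / branch / output sizes
    params = [
        (rng.standard_normal((n, r1)), rng.standard_normal((m1, r1)),
         [lambda t: t**2] * r1),
        (rng.standard_normal((m1, r2)), rng.standard_normal((m2, r2)),
         [np.tanh] * r2),
    ]
    x = rng.standard_normal(n)
    print(two_layer_flexible(x, params))                   # 2-dimensional output

In this reading, single-layer decoupling recovers one (V, W, g) triple from a trained network, whereas two-layer decoupling recovers two such triples jointly, giving the extra flexibility the abstract refers to.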
Pages: 226 - 230
Number of pages: 5