Compressing neural networks with two-layer decoupling

Cited by: 0
Authors
De Jonghe, Joppe [1]
Usevich, Konstantin [2]
Dreesen, Philippe [3]
Ishteva, Mariya [1]
Affiliations
[1] Katholieke Univ Leuven, Dept Comp Sci, Geel, Belgium
[2] Univ Lorraine, CNRS, Nancy, France
[3] Maastricht Univ, DACS, Maastricht, Netherlands
Source
2023 IEEE 9TH INTERNATIONAL WORKSHOP ON COMPUTATIONAL ADVANCES IN MULTI-SENSOR ADAPTIVE PROCESSING (CAMSAP), 2023
Keywords
tensor; tensor decomposition; decoupling; compression; neural network; model compression; acceleration
DOI
10.1109/CAMSAP58249.2023.10403509
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Discipline Classification Code
081203; 0835
Abstract
The single-layer decoupling problem has recently been used to compress neural networks. However, methods based on single-layer decoupling can only compress a network into a single flexible layer, so compressing more complex networks yields poor approximations of the original network. The ability to compress into more than one flexible layer therefore allows the underlying network to be approximated more accurately than compression into a single flexible layer, and corresponds to solving a multi-layer decoupling problem. As a first step towards general multi-layer decoupling, this work introduces a method for solving the two-layer decoupling problem in the approximate case, enabling the compression of neural networks into two flexible layers.
Pages: 226-230
Page count: 5
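
For concreteness, the sketch below illustrates the structure such a compression targets: each flexible layer computes y = W g(Vx), where g applies an independent univariate nonlinearity to each internal branch z_i = (Vx)_i, and the two-layer decoupled model is the composition of two such layers. This is only a hypothetical NumPy illustration of the model structure under assumed polynomial branch functions and made-up dimensions; it does not reproduce the paper's fitting method.

```python
# Hypothetical sketch (assumed structure, not the authors' algorithm):
# a flexible decoupled layer computes y = W @ g(V @ x), where g applies an
# independent univariate polynomial g_i to each branch z_i = (V @ x)_i.
import numpy as np

def decoupled_layer(x, V, W, coeffs):
    """Evaluate y = W @ g(V @ x).

    coeffs[i] holds the coefficients of branch polynomial g_i,
    ordered from lowest to highest degree.
    """
    z = V @ x  # mix the inputs into the internal branches
    g = np.array([np.polyval(c[::-1], zi) for c, zi in zip(coeffs, z)])
    return W @ g  # mix the branch outputs

def two_layer_model(x, layers):
    """Compose decoupled layers; two entries give the two-layer structure."""
    for V, W, coeffs in layers:
        x = decoupled_layer(x, V, W, coeffs)
    return x

# Assumed dimensions: 4 inputs -> 3 branches -> 3 hidden -> 2 branches -> 2 outputs.
rng = np.random.default_rng(0)
deg = 3  # cubic branch polynomials
layer1 = (rng.standard_normal((3, 4)), rng.standard_normal((3, 3)),
          rng.standard_normal((3, deg + 1)))
layer2 = (rng.standard_normal((2, 3)), rng.standard_normal((2, 2)),
          rng.standard_normal((2, deg + 1)))
print(two_layer_model(rng.standard_normal(4), [layer1, layer2]))
```

In prior single-layer decoupling work, the mixing matrices V, W and the branch nonlinearities are typically recovered from first-order derivative information of the network via a tensor decomposition; the paper generalizes that setting to two layers in the approximate case.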