Improving efficiency in convolutional neural networks with multilinear filters

Cited by: 31
Authors
Dat Thanh Tran [1 ]
Iosifidis, Alexandros [2 ]
Gabbouj, Moncef [1 ]
Affiliations
[1] Tampere Univ Technol, Signal Proc Lab, Tampere, Finland
[2] Aarhus Univ, Dept Engn Elect & Comp Engn, Aarhus, Denmark
Funding
Academy of Finland;
Keywords
Convolutional neural networks; Multilinear projection; Network compression; Discriminant analysis; Tensor;
DOI
10.1016/j.neunet.2018.05.017
CLC number
TP18 [Artificial intelligence theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The excellent performance of deep neural networks has enabled us to solve several automation problems, opening an era of autonomous devices. However, current deep network architectures are heavy, with millions of parameters, and require billions of floating-point operations. Several works compress a pre-trained deep network to reduce its memory footprint and, possibly, its computation. Instead of compressing a pre-trained network, in this work we propose a generic neural network layer structure that employs multilinear projection as the primary feature extractor. The proposed architecture requires several times less memory than traditional Convolutional Neural Networks (CNNs), while inheriting similar design principles. In addition, the proposed architecture is equipped with two computation schemes that enable computation reduction or scalability. Experimental results show the effectiveness of our compact projection, which outperforms traditional CNNs while requiring far fewer parameters. (c) 2018 Elsevier Ltd. All rights reserved.
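To illustrate the kind of parameter saving the abstract refers to, the sketch below shows one common form of multilinear projection: a rank-1 (separable) filter that responds to a local patch using one weight vector per tensor mode instead of a full convolution kernel. This is a minimal illustrative assumption about how such a layer can be built, not the authors' exact formulation; the function multilinear_response and all shapes here are hypothetical.

# Minimal sketch of a rank-1 multilinear filter response (illustrative assumption,
# not the paper's exact layer definition).
import numpy as np

def multilinear_response(patch, w_h, w_w, w_c):
    """Project a patch X in R^{h x w x c} with mode-wise vectors:
    sum_{i,j,k} X[i,j,k] * w_h[i] * w_w[j] * w_c[k].
    Equivalent to correlating the patch with the rank-1 kernel outer(w_h, w_w, w_c)."""
    return np.einsum('ijk,i,j,k->', patch, w_h, w_w, w_c)

h, w, c = 3, 3, 64                      # hypothetical patch size and channel count
patch = np.random.randn(h, w, c)
w_h, w_w, w_c = np.random.randn(h), np.random.randn(w), np.random.randn(c)

print(multilinear_response(patch, w_h, w_w, w_c))
# Parameter count per filter: h + w + c = 70 mode weights, versus
# h * w * c = 576 weights for an unconstrained convolution kernel of the same size,
# which is the kind of memory reduction the abstract claims.

Higher-rank variants would sum several such rank-1 terms, trading a few extra parameters for more expressive filters.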
Pages: 328-339
Number of pages: 12