CIRCNN: Accelerating and Compressing Deep Neural Networks Using Block-Circulant Weight Matrices

Cited by: 188
Authors
Ding, Caiwen [1 ]
Liao, Siyu [2 ]
Wang, Yanzhi [1 ]
Li, Zhe [1 ]
Liu, Ning [1 ]
Zhuo, Youwei [3 ]
Wang, Chao [3 ]
Qian, Xuehai [3 ]
Bai, Yu [4 ]
Yuan, Geng [1 ]
Ma, Xiaolong [1 ]
Zhang, Yipeng [1 ]
Tang, Jian [1 ]
Qiu, Qinru [1 ]
Lin, Xue [5 ]
Yuan, Bo [2 ]
Affiliations
[1] Syracuse Univ, Syracuse, NY 13244 USA
[2] CUNY City Coll, New York, NY 10031 USA
[3] Univ Southern Calif, Los Angeles, CA USA
[4] Calif State Univ Fullerton, Fullerton, CA 92634 USA
[5] Northeastern Univ, Boston, MA 02115 USA
Source
50TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO) | 2017
Funding
U.S. National Science Foundation;
Keywords
Deep learning; block-circulant matrix; compression; acceleration; FPGA; FFT; ARCHITECTURES; DESIGN;
DOI
10.1145/3123939.3124552
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Large-scale deep neural networks (DNNs) are both compute- and memory-intensive. As the size of DNNs continues to grow, it is critical to improve energy efficiency and performance while maintaining accuracy. For DNNs, the model size is an important factor affecting performance, scalability, and energy efficiency. Weight pruning achieves good compression ratios but suffers from three drawbacks: 1) the irregular network structure after pruning, which hurts performance and throughput; 2) the increased training complexity; and 3) the lack of a rigorous guarantee on compression ratio and inference accuracy. To overcome these limitations, this paper proposes CIRCNN, a principled approach to represent weights and process neural networks using block-circulant matrices. CIRCNN utilizes FFT-based fast multiplication, simultaneously reducing the computational complexity (in both inference and training) from O(n^2) to O(n log n) and the storage complexity from O(n^2) to O(n), with negligible accuracy loss. Compared to other approaches, CIRCNN is distinct due to its mathematical rigor: DNNs based on CIRCNN can converge to the same "effectiveness" as uncompressed DNNs. We propose the CIRCNN architecture, a universal DNN inference engine that can be implemented on various hardware/software platforms with a configurable network architecture (e.g., layer type, size, scale). In the CIRCNN architecture: 1) due to its recursive property, the FFT can serve as the key computing kernel, which ensures universal and small-footprint implementations; 2) the compressed but regular network structure avoids the pitfalls of network pruning and facilitates high performance and throughput with a highly pipelined and parallel design. To demonstrate the performance and energy efficiency, we test CIRCNN on FPGA, ASIC, and embedded processors. Our results show that the CIRCNN architecture achieves very high energy efficiency and performance with a small hardware footprint. Based on the FPGA implementation and ASIC synthesis results, CIRCNN achieves 6-102X energy efficiency improvements compared with the best state-of-the-art results.
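The complexity reduction claimed in the abstract follows from the convolution theorem: a circulant block is fully determined by one length-b vector, and multiplying it by a vector reduces to an elementwise product in the FFT domain, O(b log b) instead of O(b^2). The following minimal NumPy sketch illustrates this idea; it is not the paper's implementation, and the function names and the small p, q, b sizes are illustrative assumptions. It verifies the FFT block product against the dense product.

import numpy as np

def circulant(c):
    # Build the full b x b circulant matrix whose first column is c
    # (used here only to check the FFT shortcut against a dense product).
    b = len(c)
    return np.array([[c[(i - j) % b] for j in range(b)] for i in range(b)])

def block_circulant_matvec(W_vecs, x, b):
    # y = W @ x, where W is a (p*b) x (q*b) block-circulant matrix.
    # W_vecs[i][j] is the length-b defining vector (first column) of block
    # W_ij, so storage is O(p*q*b) = O(n) rather than O(n^2), and each
    # block product costs O(b log b) via the convolution theorem.
    p, q = len(W_vecs), len(W_vecs[0])
    x_blocks = x.reshape(q, b)
    X = np.fft.fft(x_blocks, axis=1)  # FFT of every input block
    y = np.zeros((p, b), dtype=complex)
    for i in range(p):
        acc = np.zeros(b, dtype=complex)
        for j in range(q):
            # Elementwise multiply in the frequency domain and accumulate;
            # by linearity, one inverse FFT per output block suffices.
            acc += np.fft.fft(W_vecs[i][j]) * X[j]
        y[i] = np.fft.ifft(acc)
    return y.real.ravel()

# Check against the dense O(n^2) product on a small random example.
rng = np.random.default_rng(0)
p, q, b = 2, 3, 4
W_vecs = rng.standard_normal((p, q, b))
x = rng.standard_normal(q * b)
W_dense = np.block([[circulant(W_vecs[i][j]) for j in range(q)]
                    for i in range(p)])
assert np.allclose(block_circulant_matvec(W_vecs, x, b), W_dense @ x)

In hardware, the same structure lets a single FFT kernel serve every layer, which is the basis for the universal, small-footprint implementations described in the abstract.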
Pages: 395-408
Number of pages: 14