Efficient and Effective Sparse LSTM on FPGA with Bank-Balanced Sparsity

Cited by: 127
Authors
Cao, Shijie [1]
Zhang, Chen [2]
Yao, Zhuliang [3]
Xiao, Wencong [4]
Nie, Lanshun [1]
Zhan, Dechen [1]
Liu, Yunxin [2]
Wu, Ming [2]
Zhang, Lintao [2]
Affiliations
[1] Harbin Inst Technol, Harbin, Peoples R China
[2] Microsoft Res, Redmond, WA USA
[3] Tsinghua Univ, Beijing, Peoples R China
[4] Beihang Univ, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 2019 ACM/SIGDA INTERNATIONAL SYMPOSIUM ON FIELD-PROGRAMMABLE GATE ARRAYS (FPGA'19) | 2019
Keywords
FPGA; Deep Neural Networks; LSTM; Weight Pruning; Inference; Bank-Balanced Sparsity;
DOI
10.1145/3289602.3293898
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline Classification Code
081202;
Abstract
Neural networks based on Long Short-Term Memory (LSTM) are widely deployed in latency-sensitive language and speech applications. To speed up LSTM inference, previous research proposes weight pruning techniques to reduce computational cost. Unfortunately, the irregular computation and memory accesses in unrestricted sparse LSTM limit the realizable parallelism, especially when implemented on FPGA. To address this issue, some researchers propose block-based sparsity patterns to increase the regularity of sparse weight matrices, but these approaches suffer from deteriorated prediction accuracy. This work presents Bank-Balanced Sparsity (BBS), a novel sparsity pattern that maintains model accuracy at a high sparsity level while still enabling an efficient FPGA implementation. BBS partitions each weight matrix row into banks for parallel computing, while adopting fine-grained pruning inside each bank to maintain model accuracy. We develop a three-step software-hardware co-optimization approach to apply BBS in real FPGA hardware. First, we propose a bank-balanced pruning method to induce the BBS pattern on weight matrices. Then we introduce a decoding-free sparse matrix format, Compressed Sparse Banks (CSB), that transparently exposes the inter-bank parallelism in BBS to hardware. Finally, we design an FPGA accelerator that takes advantage of BBS to eliminate irregular computation and memory accesses. Implemented on an Intel Arria 10 FPGA, the BBS accelerator achieves 750.9 GOPs on sparse LSTM networks with a batch size of 1. Compared to state-of-the-art FPGA accelerators for LSTM with different compression techniques, the BBS accelerator achieves a 2.3x to 3.7x improvement in energy efficiency and a 7.0x to 34.4x reduction in latency with negligible loss of model accuracy.
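To make the abstract's description more concrete, the sketch below illustrates bank-balanced pruning (keeping the top-k largest-magnitude weights inside every bank of each row) and a CSB-like encoding that interleaves non-zeros across banks. This is a minimal NumPy sketch under assumed conventions; the function names, the keep-count calculation, and the exact interleaving order are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bank_balanced_prune(W, num_banks=4, sparsity=0.75):
    """Zero out weights so every bank in every row keeps the same number
    of largest-magnitude entries (bank-balanced sparsity)."""
    rows, cols = W.shape
    assert cols % num_banks == 0, "row length must split evenly into banks"
    bank_size = cols // num_banks
    keep = max(1, int(round(bank_size * (1.0 - sparsity))))  # non-zeros kept per bank

    pruned = np.zeros_like(W)
    for r in range(rows):
        for b in range(num_banks):
            start = b * bank_size
            bank = W[r, start:start + bank_size]
            top = np.argsort(np.abs(bank))[-keep:]   # largest-magnitude positions in this bank
            pruned[r, start + top] = bank[top]
    return pruned, keep

def to_csb_like(pruned, num_banks, keep):
    """Pack a bank-balanced matrix into flat (values, intra_bank_indices) arrays,
    interleaving the k-th non-zero of every bank so banks can be fetched in parallel.
    Assumes no kept weight is exactly zero (true for typical float weights)."""
    rows, cols = pruned.shape
    bank_size = cols // num_banks
    values, indices = [], []
    for r in range(rows):
        banks = pruned[r].reshape(num_banks, bank_size)
        nz = [np.nonzero(banks[b])[0][:keep] for b in range(num_banks)]
        for k in range(keep):
            for b in range(num_banks):
                idx = nz[b][k]
                values.append(banks[b, idx])
                indices.append(idx)          # index is local to the bank, so it stays small
    return np.asarray(values), np.asarray(indices)

# Example: prune a random 4x16 matrix into 4 banks per row at 75% sparsity.
W = np.random.randn(4, 16)
W_bbs, keep = bank_balanced_prune(W, num_banks=4, sparsity=0.75)
vals, idxs = to_csb_like(W_bbs, num_banks=4, keep=keep)
print(W_bbs)
print(vals, idxs)
```

Because every bank holds the same number of non-zeros, the interleaved layout lets one value per bank be consumed each cycle without decoding, which is the intuition behind the decoding-free CSB format described in the abstract.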
Pages: 63-72
Page count: 10