SIGMA: A Sparse and Irregular GEMM Accelerator with Flexible Interconnects for DNN Training

Cited by: 327
Authors
Qin, Eric [1 ]
Samajdar, Ananda [1 ]
Kwon, Hyoukjun [1 ]
Nadella, Vineet [1 ]
Srinivasan, Sudarshan [2 ]
Das, Dipankar [2 ]
Kaul, Bharat [2 ]
Krishna, Tushar [1 ]
Affiliations
[1] Georgia Inst Technol, Atlanta, GA 30332 USA
[2] Intel, Santa Clara, CA USA
Source
2020 IEEE INTERNATIONAL SYMPOSIUM ON HIGH PERFORMANCE COMPUTER ARCHITECTURE (HPCA 2020) | 2020
Keywords
DOI
10.1109/HPCA47549.2020.00015
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
The advent of Deep Learning (DL) has radically transformed the computing industry across the entire spectrum from algorithms to circuits. As myriad application domains embrace DL, it has become synonymous with a genre of workloads across vision, speech, language, recommendations, robotics, and games. The key compute kernel within most DL workloads is the general matrix-matrix multiplication (GEMM), which appears frequently during both the forward pass (inference and training) and the backward pass (training). GEMMs are a natural choice for hardware acceleration to speed up training, and have led to 2D systolic architectures like NVIDIA Tensor Cores and the Google Tensor Processing Unit (TPU). Unfortunately, emerging GEMMs in DL are highly irregular and sparse, which leads to poor data mappings on systolic architectures. This paper proposes SIGMA, a flexible and scalable architecture that offers high utilization of all its processing elements (PEs) regardless of kernel shape and sparsity. SIGMA includes a novel reduction-tree microarchitecture named Forwarding Adder Network (FAN). SIGMA performs 5.7x better than systolic array architectures for irregular sparse matrices, and roughly 3x better than state-of-the-art sparse accelerators. We demonstrate an instance of SIGMA operating at 10.8 TFLOPS efficiency across arbitrary levels of sparsity, with a 65.10 mm² and 22.33 W footprint on a 28 nm process.
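To make the utilization argument concrete, the following minimal Python sketch (not taken from the paper; the mapping, array size, GEMM shape, and density are illustrative assumptions) contrasts the PE utilization of a rigid 128x128 systolic array, which wastes PEs on partially filled edge tiles for irregular output shapes, with an idealized flexible fabric that only spends PE slots on nonzero multiply-accumulates.

import math

def systolic_utilization(M, N, R=128, C=128):
    # Spatially tile the M x N output over a rigid R x C array; partially
    # filled edge tiles leave PEs idle, which is where irregular shapes lose.
    tiles = math.ceil(M / R) * math.ceil(N / C)
    return (M * N) / (tiles * R * C)

def flexible_utilization(M, K, N, density=1.0, num_pes=128 * 128):
    # Idealized flexible fabric: only nonzero multiply-accumulates occupy
    # PE slots, and any idle PE can pick up remaining work.
    useful_macs = M * K * N * density
    return useful_macs / (math.ceil(useful_macs / num_pes) * num_pes)

# Example: a tall, skinny GEMM shape of the kind found in DL training.
M, K, N = 1024, 32, 64
print(f"rigid systolic array utilization: {systolic_utilization(M, N):.1%}")
print(f"idealized flexible utilization:   {flexible_utilization(M, K, N, density=0.3):.1%}")

With these assumed numbers the rigid array reaches 50% utilization while the idealized flexible fabric stays near full occupancy, which is the gap SIGMA's flexible interconnects and FAN reduction tree are designed to close.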
Pages: 58-70
Page count: 13