PUMA: A Programmable Ultra-efficient Memristor-based Accelerator for Machine Learning Inference

Cited by: 302
Authors
Ankit, Aayush [1 ,2 ]
El Hajj, Izzat [3 ,4 ]
Chalamalasetti, Sai Rahul [2 ]
Ndu, Geoffrey [2 ]
Foltin, Martin [2 ]
Williams, R. Stanley [2 ]
Faraboschi, Paolo [2 ]
Hwu, Wen-mei [4 ]
Strachan, John Paul [2 ]
Roy, Kaushik [1 ]
Milojicic, Dejan S. [2 ]
Affiliations
[1] Purdue Univ, W Lafayette, IN 47907 USA
[2] Hewlett Packard Enterprise, San Jose, CA 95002 USA
[3] Amer Univ Beirut, Beirut, Lebanon
[4] Univ Illinois, Champaign, IL USA
Source
TWENTY-FOURTH INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS (ASPLOS XXIV) | 2019
Keywords
memristors; accelerators; machine learning; neural networks; memory; scale; coprocessor
DOI
10.1145/3297858.3304049
Chinese Library Classification
TP3 [computing technology; computer technology]
Subject classification code
0812
Abstract
Memristor crossbars are circuits capable of performing analog matrix-vector multiplications, overcoming the fundamental energy-efficiency limitations of digital logic. They have been shown to be effective in special-purpose accelerators for a limited set of neural network applications. We present the Programmable Ultra-efficient Memristor-based Accelerator (PUMA), which enhances memristor crossbars with general-purpose execution units to enable the acceleration of a wide variety of Machine Learning (ML) inference workloads. PUMA's microarchitecture techniques, exposed through a specialized Instruction Set Architecture (ISA), retain the efficiency of in-memory computing and analog circuitry without compromising programmability. We also present the PUMA compiler, which translates high-level code to the PUMA ISA. The compiler partitions the computational graph and optimizes instruction scheduling and register allocation to generate code for large and complex workloads to run on thousands of spatial cores. We have developed a detailed architecture simulator that incorporates the functionality, timing, and power models of PUMA's components to evaluate performance and energy consumption. A PUMA accelerator running at 1 GHz can reach area and power efficiencies of 577 GOPS/s/mm² and 837 GOPS/s/W, respectively. Our evaluation of diverse ML applications from image recognition, machine translation, and language modelling (5M-800M synapses) shows that PUMA achieves up to 2,446x energy and 66x latency improvements for inference compared to state-of-the-art GPUs. Compared to an application-specific memristor-based accelerator, PUMA incurs small energy overheads at similar inference latency while adding programmability.
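As a rough illustration of the analog matrix-vector multiplication the abstract refers to, the sketch below models an idealized memristor crossbar in Python. It is not taken from the paper or the PUMA ISA: the function and variable names (crossbar_mvm, G, V) are hypothetical, and the model assumes ideal devices with no noise, bounded-range, or ADC/DAC effects.

```python
# Minimal sketch (assumption, not the paper's code): an idealized model of how a
# memristor crossbar computes a matrix-vector product in the analog domain.
# Matrix weights are stored as cell conductances G[i][j]; the input vector is
# applied as row voltages V[i]; by Ohm's and Kirchhoff's laws each column wire
# collects a current I[j] = sum_i V[i] * G[i][j], i.e. one dot product per column.

import numpy as np

def crossbar_mvm(G: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Return column currents for row voltages V applied to conductance matrix G."""
    # All multiply-accumulates happen in the analog summation of currents on the
    # column wires, which is the source of the energy efficiency the abstract cites.
    return V @ G  # I[j] = sum_i V[i] * G[i][j]

# Example: a 4x3 weight matrix mapped to conductances, one inference step.
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances encoding the weights
V = rng.uniform(0.0, 1.0, size=4)        # input activations applied as voltages
print(crossbar_mvm(G, V))                # column currents = output activations
```

In a physical crossbar, weights must be encoded as bounded, non-negative conductances (signed values typically use pairs of cells), and digital-to-analog and analog-to-digital conversion at the array periphery adds cost that this idealized sketch omits.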
Pages: 715-731
Number of pages: 17