A Lightweight Posit Processing Unit for RISC-V Processors in Deep Neural Network Applications

Cited by: 20
Authors:
Cococcioni, Marco [1 ]
Rossi, Federico [1 ]
Ruffaldi, Emanuele [2 ]
Saponara, Sergio [1 ]
Affiliations:
[1] Univ Pisa, Dept Informat Engn, I-56122 Pisa, Italy
[2] MMI SpA, I-56011 Calci, Italy
Funding:
European Union Horizon 2020;
Keywords:
Alternative representations of real numbers; posit arithmetic; hardware synthesis; RISC-V processors; instruction set architecture extension; scalar operations;
DOI:
10.1109/TETC.2021.3120538
Chinese Library Classification (CLC):
TP [automation technology, computer technology];
Discipline code:
0812;
Abstract:
Nowadays, two groundbreaking factors are emerging in neural networks. First, the RISC-V open instruction set architecture (ISA) allows a seamless implementation of custom instruction-set extensions. Second, several novel formats for real-number arithmetic have been proposed. In this work, we combine these two key aspects using the very promising posit format, developing a lightweight Posit Processing Unit (PPU-light). We present an extension of the base RISC-V ISA that allows conversion between 8- or 16-bit posits and 32-bit IEEE floats or fixed-point formats, in order to offer a compressed representation of real numbers with little to no accuracy degradation. We then elaborate on the hardware and software toolchain integration of our PPU-light inside the Ariane RISC-V core, showing how little it impacts circuit complexity and power consumption: only 0.36% of the circuit is devoted to the PPU-light, while the full RISC-V core occupies 33% of the overall circuit complexity. Finally, we present the impact of our PPU-light on a deep neural network task, reporting speedups of up to 10x on sample inference processing time.
Pages: 1898-1908
Number of pages: 11
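The core operation the abstract describes is converting small posits into standard floats. As an illustration of what such a conversion involves, the sketch below decodes an 8-bit posit to a Python float in software. This is not the paper's hardware implementation; it assumes Gustafson's posit field layout (sign | regime | exponent | fraction) with a configurable number of exponent bits `es`, defaulting to es=0.

```python
def posit8_to_float(p, es=0):
    """Decode an 8-bit posit (with `es` exponent bits) to a float.

    Illustrative software sketch of the posit-to-float conversion
    that the PPU-light performs in hardware (assumption: es=0 layout).
    """
    n = 8
    if p == 0:
        return 0.0
    if p == 0x80:                    # Not-a-Real (NaR) encoding
        return float('nan')
    sign = -1.0 if p & 0x80 else 1.0
    if p & 0x80:                     # negative posits are two's-complement negated
        p = (-p) & 0xFF
    bits = p & 0x7F                  # remaining n-1 bits: regime, exponent, fraction
    # Regime: a run of identical bits terminated by the complementary bit
    first = (bits >> (n - 2)) & 1
    run = 0
    for i in range(n - 2, -1, -1):
        if ((bits >> i) & 1) == first:
            run += 1
        else:
            break
    k = run - 1 if first == 1 else -run
    rem = max(n - 1 - run - 1, 0)    # bits left after regime and its terminator
    tail = bits & ((1 << rem) - 1) if rem > 0 else 0
    e_bits = min(es, rem)
    exp = (tail >> (rem - e_bits)) if rem > 0 else 0
    exp <<= (es - e_bits)            # truncated exponent bits are taken as zero
    frac_bits = rem - e_bits
    frac = tail & ((1 << frac_bits) - 1) if frac_bits > 0 else 0
    useed = 2 ** (2 ** es)
    mant = (1.0 + frac / (1 << frac_bits)) if frac_bits > 0 else 1.0
    return sign * (useed ** k) * (2.0 ** exp) * mant
```

For example, with es=0 the encoding `0x40` decodes to 1.0 and `0x48` to 1.25; the 16-bit case follows the same scheme with a wider fraction field.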