Hardware-Aware Softmax Approximation for Deep Neural Networks

Cited by: 20
Authors
Geng, Xue [1 ]
Lin, Jie [1 ]
Zhao, Bin [2 ]
Kong, Anmin [2 ]
Aly, Mohamed M. Sabry [3 ]
Chandrasekhar, Vijay [1 ]
Affiliations
[1] ASTAR, I2R, Singapore, Singapore
[2] ASTAR, IME, Singapore, Singapore
[3] Nanyang Technol Univ, Sch CSE, Singapore, Singapore
Source
COMPUTER VISION - ACCV 2018, PT IV | 2019 / Vol. 11364
Keywords
Softmax; Nonlinear operation; Power; Area
DOI
10.1007/978-3-030-20870-7_7
CLC Number
TP31 [Computer Software]
Discipline Codes
081202; 0835
Abstract
There has been rapid development of custom hardware for accelerating the inference of deep neural networks (DNNs) by explicitly incorporating hardware metrics (e.g., area and energy) as constraints alongside application accuracy. Recent efforts have mainly focused on linear functions (matrix multiplication) in convolutional (Conv) or fully connected (FC) layers, while there is no publicly available study on optimizing the inference of non-linear functions in DNNs under hardware constraints. In this paper, we address the problem of cost-efficient inference for Softmax, a popular non-linear function in DNNs. We introduce a hardware-aware linear approximation framework based on algorithm and hardware co-optimization, with the goal of minimizing cost in terms of area and energy without incurring significant loss in application accuracy. This is achieved by simultaneously reducing the operand bit-width and approximating cost-intensive operations in Softmax (e.g., exponential and division) with cost-effective operations (e.g., addition and bit shifts). We designed and synthesized a hardware unit for our approximation approach to estimate its area and energy consumption. In addition, we introduce a training method that further saves area and energy cost through reduced precision. Our approach reduces area cost by 13× and energy consumption by 2× at 11-bit operand width, compared to a 19-bit baseline, for Faster R-CNN on the VOC2007 dataset.
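The abstract's core idea is replacing Softmax's exponential and division with shift-and-add operations at reduced operand width. The paper itself gives no code; the following is a minimal, hypothetical sketch (function name, 16-bit scale, and nearest-power-of-two rounding are all illustrative assumptions, not the authors' actual design) of a base-2, integer-friendly Softmax where every "expensive" operation becomes a bit shift:

```python
def approx_softmax_int(logits, scale_bits=16):
    """Toy base-2 Softmax over integer logits (illustrative sketch only).

    e^x is replaced by 2^x, so each numerator is a right shift of a
    fixed scale factor; the denominator is rounded to the nearest
    power of two so the final division is also just a shift.
    """
    m = max(logits)
    # 2^(x_i - m) scaled by 2^scale_bits, computed purely with shifts.
    num = [(1 << scale_bits) >> (m - x) for x in logits]
    total = sum(num)
    # Round the sum to the nearest power of two, 2^k.
    k = total.bit_length() - 1
    if (total - (1 << k)) > ((1 << (k + 1)) - total):
        k += 1
    # The final "division" is a shift; shown as floats for readability.
    return [n / (1 << k) for n in num]

print(approx_softmax_int([3, 1, 0]))
```

The output is only an approximation of a probability distribution (it need not sum to exactly 1), but the ordering of scores is preserved, which is what matters for the classification and detection inference settings the paper targets.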
Pages: 107-122
Page count: 16