DeepShift: Towards Multiplication-Less Neural Networks

Cited by: 59
Authors
Elhoushi, Mostafa [1 ]
Chen, Zihao [1 ,2 ]
Shafiq, Farhan [1 ]
Tian, Ye Henry [1 ]
Li, Joey Yiwei [1 ]
Affiliations
[1] Huawei Technol, Markham, ON, Canada
[2] Univ Toronto, Toronto, ON, Canada
Source
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021 | 2021
DOI
10.1109/CVPRW53098.2021.00268
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The high computation, memory, and power budgets of inferring convolutional neural networks (CNNs) are major bottlenecks of model deployment to edge computing platforms, e.g., mobile devices and IoT. Moreover, training CNNs is time- and energy-intensive even on high-grade servers. Convolution layers and fully connected layers, because of their intense use of multiplications, are the dominant contributors to this computation budget. We propose to alleviate this problem by introducing two new operations, convolutional shifts and fully connected shifts, which replace multiplications with bitwise shifts and sign flips during both training and inference. During inference, both approaches require only 5 bits (or less) to represent the weights. This family of neural network architectures (which use convolutional shifts and fully connected shifts) is referred to as DeepShift models. We propose two methods to train DeepShift models: DeepShift-Q, which trains regular weights constrained to powers of 2, and DeepShift-PS, which trains the values of the shifts and sign flips directly. Accuracy very close to, and in some cases higher than, the baselines is achieved. Converting pre-trained 32-bit floating-point baseline models of ResNet18, ResNet50, VGG16, and GoogleNet to DeepShift and training them for 15 to 30 epochs resulted in Top-1/Top-5 accuracies higher than those of the original models. Last but not least, we implemented the convolutional shift and fully connected shift GPU kernels and showed a 25% reduction in latency when inferring ResNet18 compared to unoptimized multiplication-based GPU kernels. The code can be found at https://github.com/mostafaelhoushi/DeepShift.
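The core idea summarized above — constraining each weight to a signed power of two so that a multiplication becomes a sign flip plus a bitwise shift, with the pair fitting in 5 bits — can be sketched as follows. This is a minimal illustration of the power-of-two rounding idea (in the spirit of DeepShift-Q's weight constraint), not the authors' implementation; the function names `quantize_to_shift` and `shift_dot` and the shift range are assumptions for illustration.

```python
import math

def quantize_to_shift(w, max_shift=15):
    """Round a weight to the nearest signed power of two: w ≈ s * 2**p.
    With |p| <= 15, (s, p) fits in 5 bits: 1 sign bit + 4 shift bits."""
    if w == 0.0:
        return 0, 0
    s = 1 if w > 0 else -1
    p = round(math.log2(abs(w)))
    p = max(-max_shift, min(0, p))  # typical trained CNN weights satisfy |w| <= 1
    return s, p

def shift_dot(x, w):
    """Dot product where each multiply is replaced by a sign flip plus a
    power-of-two scaling (a bitwise shift on fixed-point hardware)."""
    acc = 0.0
    for xi, wi in zip(x, w):
        s, p = quantize_to_shift(wi)
        acc += s * (xi * 2.0 ** p)  # 2.0**p emulates the right-shift in float
    return acc

w = [0.24, -0.5, 0.12]              # rounds to +2^-2, -2^-1, +2^-3
x = [1.0, 2.0, 3.0]
exact = sum(xi * wi for xi, wi in zip(x, w))  # ≈ -0.4 with real-valued weights
approx = shift_dot(x, w)                       # -0.375 with shift-only weights
```

On fixed-point hardware the `2.0 ** p` factor is a genuine bit shift of the activation, so the inner loop contains no multiplier at all; the small gap between `exact` and `approx` is the quantization error the paper's fine-tuning recovers.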
Pages: 2359-2368
Page count: 10