An efficient stochastic computing based deep neural network accelerator with optimized activation functions

Cited by: 2
Authors
Bodiwala S. [1 ]
Nanavati N. [2 ]
Affiliations
[1] Gujarat Technological University, Ahmedabad, Gujarat
[2] Sarvajanik College of Engineering and Technology, Surat, Gujarat
Keywords
Accelerator; Custom computing; Deep neural network; Optimization; Stochastic computing;
DOI
10.1007/s41870-021-00682-2
Abstract
Recently, Deep Neural Networks (DNNs) have played a major role in revolutionizing Artificial Intelligence (AI). These methods have made unprecedented progress in several recognition and detection tasks, achieving accuracy close to, or even better than, human perception. However, their complex architectures require high computational resources, restricting their usage in embedded devices with limited area and power. This paper considers Stochastic Computing (SC) as a novel computing paradigm that provides a significantly lower hardware footprint with high scalability. SC operates on random bit-streams in which a signal value is encoded by the probability of a bit in the bit-stream being one. With this representation, SC allows basic arithmetic operations, including addition and multiplication, to be implemented with simple logic. Hence, SC has the potential to implement parallel and scalable DNNs with a reduced hardware footprint. A modified SC neuron architecture is proposed for the training and implementation of DNNs. Our experimental results show classification accuracy on the MNIST dataset improved by 9.47% compared to the conventional binary computing approach. © 2021, Bharati Vidyapeeth's Institute of Computer Applications and Management.
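The SC encoding described in the abstract can be sketched in a few lines of Python. This is an illustrative software model only (function names and parameters are our own, not from the paper): values in [0, 1] become bit-streams whose fraction of ones equals the value, multiplication of two unipolar streams reduces to a bitwise AND, and scaled addition to a 2-to-1 multiplexer with a 0.5-probability select stream.

```python
import random

def to_bitstream(p, n, rng):
    # Encode a value p in [0, 1] as an n-bit stream where P(bit = 1) = p.
    return [1 if rng.random() < p else 0 for _ in range(n)]

def sc_multiply(a_bits, b_bits):
    # Unipolar SC multiplication: bitwise AND of two independent streams,
    # since P(a AND b) = P(a) * P(b) for independent bits.
    return [a & b for a, b in zip(a_bits, b_bits)]

def sc_add_scaled(a_bits, b_bits, sel_bits):
    # SC scaled addition: a 2-to-1 MUX driven by a select stream with
    # P(sel = 1) = 0.5 computes (a + b) / 2, keeping the result in [0, 1].
    return [b if s else a for a, b, s in zip(a_bits, b_bits, sel_bits)]

def decode(bits):
    # The represented value is the fraction of ones in the stream.
    return sum(bits) / len(bits)

rng = random.Random(0)
n = 100_000
a = to_bitstream(0.5, n, rng)
b = to_bitstream(0.4, n, rng)
sel = to_bitstream(0.5, n, rng)

product = decode(sc_multiply(a, b))          # approx 0.5 * 0.4 = 0.2
scaled_sum = decode(sc_add_scaled(a, b, sel))  # approx (0.5 + 0.4) / 2 = 0.45
```

The accuracy of the decoded results improves with stream length n (the estimate's variance shrinks as 1/n), which is the usual SC trade-off between latency and precision that hardware designs must balance.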
Pages: 1179-1192
Page count: 13