Nanoscale Accelerators for Artificial Neural Networks

Cited by: 3
Authors
Niknia, Farzad [1 ]
Wang, Ziheng [1 ]
Liu, Shanshan [2 ]
Louri, Ahmed [3 ]
Lombardi, Fabrizio [1 ]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02215 USA
[2] New Mexico State Univ, Klipsch Sch Elect & Comp Engn, Las Cruces, NM 88001 USA
[3] George Washington Univ, Dept Elect & Comp Engn, Washington, DC 20052 USA
Keywords
Accelerator; artificial neural network; ASIC; fixed-point; floating-point; multilayer perceptron; multiply accumulation; nanotechnology; stochastic computing
DOI
10.1109/MNANO.2022.3208757
CLC Number
TB3 [Engineering Materials Science];
Subject Classification Codes
0805; 080502
Abstract
Artificial neural networks (ANNs) are usually implemented in hardware accelerators to achieve efficient inference processing; the hardware implementation of an ANN accelerator requires careful consideration of overhead metrics (such as delay, energy, and area) as well as performance (usually measured by accuracy). This article considers ASIC-based accelerators from the perspective of arithmetic design. The feasibility of different schemes (parallel, serial, and hybrid arrangements) and different types of arithmetic computing (floating-point, fixed-point, and stochastic computing) for implementing multilayer perceptrons (MLPs) is assessed. The evaluation results of MLPs on two popular datasets show that the floating-point/fixed-point-based parallel (hybrid) design achieves the smallest latency (area), while the stochastic computing (SC)-based design offers the lowest energy dissipation.
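To make the arithmetic comparison concrete, the following minimal Python sketch (not taken from the article; the 7-fractional-bit fixed-point format and the 1,024-bit stream length are illustrative assumptions) contrasts a fixed-point multiply-accumulate, as used in an MLP neuron, with a unipolar stochastic-computing multiply, in which a value in [0, 1] is encoded as the ones-density of a bitstream and a single AND gate performs the multiplication.

# Minimal, illustrative sketch (not the authors' implementation) of two of the
# arithmetic styles compared in the article: fixed-point MAC vs. stochastic computing.
import numpy as np

rng = np.random.default_rng(0)

def fixed_point_mac(x, w, frac_bits=7):
    """Multiply-accumulate with signed fixed-point operands (7 fractional bits assumed)."""
    scale = 1 << frac_bits
    xq = np.round(np.asarray(x) * scale).astype(np.int32)   # quantize inputs
    wq = np.round(np.asarray(w) * scale).astype(np.int32)   # quantize weights
    acc = np.sum(xq * wq)                                    # wide integer accumulator
    return acc / (scale * scale)                             # rescale to a real value

def sc_multiply(a, b, stream_len=1024):
    """Unipolar SC multiply: values in [0, 1] become Bernoulli bitstreams,
    a single AND gate multiplies them, and the ones-density decodes the product."""
    sa = rng.random(stream_len) < a                          # encode a as a bitstream
    sb = rng.random(stream_len) < b                          # encode b as a bitstream
    return np.mean(sa & sb)                                  # decode the AND-ed stream

x = [0.50, 0.25, 0.75]
w = [0.40, 0.80, 0.10]
print("exact MAC      :", sum(xi * wi for xi, wi in zip(x, w)))
print("fixed-point MAC:", fixed_point_mac(x, w))
print("SC products    :", [round(sc_multiply(xi, wi), 3) for xi, wi in zip(x, w)])

The gap between the exact and SC products reflects the trade-off noted in the abstract: SC hardware is very small and low-power, but its precision improves only with longer bitstreams (and thus longer latency).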
Pages: 14-21
Number of pages: 8