Design Optimization for High-Performance Computing Using FPGA

Cited: 0
Authors
Isik, Murat [1 ]
Inadagbo, Kayode [2 ]
Aktas, Hakan [3 ]
Affiliations
[1] Drexel Univ, Elect & Comp Engn Dept, Philadelphia, PA 19104 USA
[2] A&M Univ, Elect & Comp Engn Dept, Prairie View, TX USA
[3] Omer Halisdemir Univ, Comp Engn Dept, Nigde, Turkiye
Source
INFORMATION MANAGEMENT AND BIG DATA, SIMBIG 2023 | 2024 / Vol. 2142
Keywords
High-performance computing; Tensil AI; Design optimization; FPGA; Open-source inference accelerator
DOI
10.1007/978-3-031-63616-5_11
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Reconfigurable architectures such as Field Programmable Gate Arrays (FPGAs) have been used to accelerate computations in several domains because of their unique combination of flexibility, performance, and power efficiency. However, FPGAs have not been widely adopted for high-performance computing, primarily because of their programming complexity and the difficulty of optimizing for performance. To gain insight into the use of FPGAs for high-performance computing, we optimize Tensil AI's open-source inference accelerator for maximum performance using ResNet20 trained on CIFAR. We show how improving the hardware design, using Xilinx Ultra RAM, and applying advanced compiler strategies lead to improved inference performance. We also demonstrate that running the CIFAR test data set shows very little accuracy drop when rounding down from the original 32-bit floating point. The heterogeneous computing model of our platform achieves a frame rate of 293.58 frames per second (FPS) and 90% accuracy on a ResNet20 trained on CIFAR. Experimental results show that the proposed accelerator achieves a throughput of 21.12 giga-operations per second (GOP/s) with 5.21 W of on-chip power consumption at 100 MHz. Comparisons with off-the-shelf devices and recent state-of-the-art implementations show that the proposed accelerator has clear advantages in energy efficiency.
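The headline figures in the abstract can be related to each other by simple arithmetic: energy efficiency is throughput divided by power, and the implied per-frame workload is throughput divided by frame rate. The following sketch (an illustrative consistency check using only the numbers reported above, not code from the paper) makes these relations explicit.

```python
# Consistency check on the metrics reported in the abstract.
throughput_gops = 21.12   # throughput in giga-operations per second
power_w = 5.21            # on-chip power consumption in watts at 100 MHz
fps = 293.58              # frame rate on ResNet20 / CIFAR

# Energy efficiency in GOP/s per watt (throughput / power)
efficiency = throughput_gops / power_w

# Implied workload per frame in GOP (throughput / frame rate)
gop_per_frame = throughput_gops / fps

print(f"Energy efficiency: {efficiency:.2f} GOP/s/W")
print(f"Implied workload: {gop_per_frame * 1000:.1f} MOP per frame")
```

By this reckoning the design delivers roughly 4 GOP/s per watt, which is the quantity the energy-efficiency comparison at the end of the abstract rests on.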
Pages: 142-156 (15 pages)