ScaleHLS: A New Scalable High-Level Synthesis Framework on Multi-Level Intermediate Representation

Cited by: 45
Authors
Ye, Hanchen [1 ]
Hao, Cong [2 ]
Cheng, Jianyi [3 ]
Jeong, Hyunmin [1 ]
Huang, Jack [1 ]
Neuendorffer, Stephen [4 ]
Chen, Deming [1 ]
Affiliations
[1] Univ Illinois, Urbana, IL 61801 USA
[2] Georgia Inst Technol, Atlanta, GA 30332 USA
[3] Imperial Coll London, London, England
[4] Xilinx Inc, San Jose, CA USA
Source
2022 IEEE INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE COMPUTER ARCHITECTURE (HPCA 2022) | 2022
Keywords
High-Level Synthesis; MLIR; Compiler; FPGA; Optimization; Design Space Exploration;
DOI
10.1109/HPCA53966.2022.00060
CLC Classification Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
High-level synthesis (HLS) has been widely adopted as it significantly improves hardware design productivity and enables efficient design space exploration (DSE). Existing HLS tools are built on compiler infrastructures that largely use a single level of abstraction, such as LLVM. However, as HLS designs typically come with intrinsic structural or functional hierarchies, different HLS optimization problems are often better solved at different levels of abstraction. This paper proposes ScaleHLS, a new scalable and customizable HLS framework, built on top of a multi-level compiler infrastructure called MLIR. ScaleHLS represents HLS designs at multiple levels and provides an HLS-dedicated analysis and transform library to solve each optimization problem at the most suitable level. Using this library, we provide a DSE engine that generates optimized HLS designs automatically. In addition, we develop an HLS C front-end and a C/C++ emission back-end that translate HLS designs into and out of MLIR, enabling an end-to-end compilation flow. Experimental results show that, compared to baseline designs optimized only by Xilinx Vivado HLS, without manual directive insertion or code rewriting, ScaleHLS delivers substantial quality-of-results improvements: up to 768.1× better performance on computation-kernel-level programs and up to 3825.0× better on neural network models.
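The abstract's DSE engine searches over HLS directive configurations automatically. A minimal sketch of that idea, with an invented toy cost model (the trip count, DSP budget, and latency/area formulas below are assumptions for illustration, not ScaleHLS internals): enumerate unroll/pipeline combinations for a loop kernel and keep the Pareto-optimal latency-area points.

```python
# Hypothetical sketch of HLS design space exploration (DSE):
# enumerate directive configurations (loop unroll factor, pipelining)
# and keep the points not dominated in both latency and area.
# The cost model is invented for illustration only.
from itertools import product

TRIP_COUNT = 128   # loop trip count of a toy kernel (assumed)
DSP_BUDGET = 64    # available DSP blocks (assumed)

def estimate(unroll, pipeline):
    """Toy latency/area model: unrolling divides the iteration count,
    pipelining lowers the initiation interval at the cost of hardware."""
    ii = 1 if pipeline else 4                  # initiation interval
    latency = (TRIP_COUNT // unroll) * ii
    dsps = unroll * (2 if pipeline else 1)     # area proxy
    return latency, dsps

def pareto_front(points):
    """Keep configurations not dominated in both latency and area."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

candidates = []
for unroll, pipeline in product([1, 2, 4, 8, 16], [False, True]):
    latency, dsps = estimate(unroll, pipeline)
    if dsps <= DSP_BUDGET:                     # prune infeasible points
        candidates.append((latency, dsps, unroll, pipeline))

for latency, dsps, unroll, pipeline in sorted(pareto_front(candidates)):
    print(f"unroll={unroll:2d} pipeline={str(pipeline):5} "
          f"latency={latency:4d} dsps={dsps:2d}")
```

A real engine would replace `estimate` with feedback from the HLS tool or an analytical model, but the structure (enumerate, prune by resource budget, select Pareto-optimal designs) is the same.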
Pages: 741-755
Page count: 15