REDUCER: Elimination of Repetitive Codes for Accelerated Iterative Compilation

Cited by: 2
Authors
Ahmed, Hameeza [1 ]
Ismail, Muhammad Ali [1 ]
Affiliations
[1] NED Univ Engn & Technol, Dept Comp & Informat Syst Engn, Karachi, Pakistan
Keywords
Iterative compilation; code redundancy; LLVM; IR; big data;
DOI
10.31577/cai_2021_3_543
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Low Level Virtual Machine (LLVM) is a widely adopted open-source compiler providing numerous optimization opportunities. The discovery of the best optimization sequence in this large space is done via iterative compilation, which incurs substantial overheads, especially for big data applications operating on high-volume, high-variety datasets. The large search space mostly comprises identical codes generated via different optimizations, yet no mechanism is implemented inside the LLVM compiler to suppress such redundant testing. In this regard, this paper proposes REDUCER, which eliminates identical code executions by performing Intermediate Representation (IR) level comparisons. REDUCER has been tested using the well-accepted MiCOMP technique in the LLVM 3.8 and 9.0 compilers, with embedded (cBench) and big data workloads. In comparison to MiCOMP's 19.5 k experiments, REDUCER lowers the experiment count to as few as 327 (a 98 % reduction) and on average to 4,375 (77 %) for cBench on LLVM 3.8. Similarly, for LLVM 9.0 the count is reduced to as few as 1,931 (90 %) and on average to 5,863 (69.9 %). Owing to this significant experiment reduction, for embedded workloads the iterative compilation with REDUCER is up to 58.6x and on average 4.1x faster than MiCOMP on LLVM 3.8, and up to 8.5x and on average 2.9x faster on LLVM 9.0. Moreover, REDUCER is found to be scalable and efficient for big data workloads, where iterative compilation is reduced to a few days, since the code is compared only once for a single application tested on multiple datasets.
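To illustrate the deduplication idea described in the abstract, below is a minimal sketch (not the authors' implementation) that filters candidate LLVM optimization-pass sequences by hashing the textual IR each sequence produces, so that only sequences yielding previously unseen code would go on to be compiled and benchmarked. It assumes LLVM's opt tool is on the PATH, an input IR file such as app.ll, an LLVM version whose opt accepts -O flags directly, and that a hash of the printed IR is an acceptable stand-in for the paper's IR-level comparison; the function names and the example candidate sequences are hypothetical.

    import hashlib
    import subprocess

    def optimized_ir(source_ll, passes):
        """Apply the given optimization flags to an LLVM IR file and return the printed IR."""
        result = subprocess.run(
            ["opt", "-S", *passes, source_ll],
            check=True,
            capture_output=True,
        )
        return result.stdout

    def unique_sequences(source_ll, candidates):
        """Keep only those pass sequences whose optimized IR has not been seen before."""
        seen = set()
        survivors = []
        for seq in candidates:
            digest = hashlib.sha256(optimized_ir(source_ll, seq)).hexdigest()
            if digest in seen:
                continue  # identical code was already produced by an earlier sequence
            seen.add(digest)
            survivors.append(seq)  # only these would be compiled to native code and timed
        return survivors

    # Hypothetical usage: three candidate sequences, some of which may collapse to the same IR.
    if __name__ == "__main__":
        candidates = [["-O1"], ["-O2"], ["-O2", "-O2"]]
        print(unique_sequences("app.ll", candidates))

In the setting the abstract describes, such a filter would let a single application's IR be compared once, with the surviving sequences reused across all of its datasets.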
Pages: 543-574
Page count: 32
References
37 items in total
[1] Aho, Alfred V. Compilers: Principles, Techniques and Tools. 2nd edition, 2003.
[2] [Anonymous], 2019. GREP BENCH.
[3] [Anonymous], 2019. C NEURAL NETWORK LIB.
[4] [Anonymous], 1998. WORKSH PROF FEEDB DI.
[5] [Anonymous], 2019. LLVM COMP INFR.
[6] Ashouri, Amir H.; Killian, William; Cavazos, John; Palermo, Gianluca; Silvano, Cristina. A Survey on Compiler Autotuning using Machine Learning. ACM Computing Surveys, 2019, 51(5).
[7] Ashouri, Amir H.; Bignoli, Andrea; Palermo, Gianluca; Silvano, Cristina; Kulkarni, Sameer; Cavazos, John. MiCOMP: Mitigating the Compiler Phase-Ordering Problem Using Optimization Sub-Sequences and Machine Learning. ACM Transactions on Architecture and Code Optimization, 2017, 14(3).
[8] Ashouri, Amir Hossein; Mariani, Giovanni; Palermo, Gianluca; Park, Eunjung; Cavazos, John; Silvano, Cristina. COBAYN: Compiler Autotuning Framework Using Bayesian Networks. ACM Transactions on Architecture and Code Optimization, 2016, 13(2).
[9] de la Torre, Juan Carlos; Ruiz, Patricia; Dorronsoro, Bernabe; Galindo, Pedro L. Analyzing the Influence of LLVM Code Optimization Passes on Software Performance. Information Processing and Management of Uncertainty in Knowledge-Based Systems: Applications (IPMU 2018), Part III, 2018, 855: 272-283.
[10] Che, S. A., 2009. Proceedings of the IEEE International Symposium on Workload Characterization (IISWC), p. 44. DOI: 10.1109/IISWC.2009.5306797.