A Loop-aware Autotuner for High-Precision Floating-point Applications

Cited by: 1
Authors
Gu, Ruidong [1]
Beata, Paul [1]
Becchi, Michela [1]
Affiliations
[1] North Carolina State Univ, Dept Elect & Comp Engn, Raleigh, NC 27695 USA
Source
2020 IEEE INTERNATIONAL SYMPOSIUM ON PERFORMANCE ANALYSIS OF SYSTEMS AND SOFTWARE (ISPASS) | 2020
Funding
National Science Foundation (USA)
Keywords
autotuner; mixed-precision; floating-point
DOI
10.1109/ISPASS48437.2020.00048
CLC number
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
Many scientific applications (e.g., molecular dynamics, climate modeling, and astrophysical simulations) rely on floating-point arithmetic. Due to its approximate nature, the use of floating-point arithmetic can lead to inaccuracy and reproducibility issues, which can be particularly significant for long-running applications. Indeed, previous work has shown that 64-bit IEEE floating-point arithmetic can be insufficient for many algorithms and applications, such as ill-conditioned linear systems, large summations, long-time or large-scale physical simulations, and experimental mathematics applications. To overcome these issues, existing work has proposed high-precision floating-point libraries (e.g., the GNU multiple precision arithmetic library), but these libraries come at the cost of significant execution time. In this work, we propose an auto-tuner for applications requiring high-precision floating-point arithmetic to deliver a prescribed level of accuracy. Our auto-tuner uses compiler analysis to discriminate operations and variables that require high precision from those that can be handled using standard IEEE 64-bit floating-point arithmetic, and it generates a mixed-precision program that trades off performance and accuracy by selectively using different precisions for different variables and operations. In particular, our auto-tuner leverages loop and data-dependence analysis to quickly identify precision-sensitive variables and operations and to provide results that are robust to different input datasets. We test our auto-tuner on a mix of applications with different computational patterns.
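As a minimal illustration of the mixed-precision idea the abstract describes (not the paper's actual tool), the sketch below sums a series twice: once entirely in IEEE 64-bit floating point, and once with only the accumulator — the precision-sensitive variable a dependence analysis would flag in a reduction loop — promoted to higher precision via Python's standard-library decimal module. The function names and the 50-digit working precision are illustrative assumptions.

```python
from decimal import Decimal, getcontext

def naive_sum(values):
    # Accumulate entirely in IEEE 754 64-bit floating point;
    # rounding error builds up in the reduction variable.
    total = 0.0
    for v in values:
        total += v
    return total

def mixed_precision_sum(values, digits=50):
    # Promote only the accumulator to high precision (assumed
    # 50 decimal digits); the inputs stay ordinary 64-bit floats.
    getcontext().prec = digits
    total = Decimal(0)
    for v in values:
        total += Decimal(v)  # Decimal(v) converts the float's exact value
    return float(total)      # demote the final result back to double

values = [0.1] * 10
print(naive_sum(values))            # slightly off 1.0 due to rounding
print(mixed_precision_sum(values))  # rounds back to exactly 1.0
```

Promoting only the accumulator, rather than every variable, is the trade-off the auto-tuner automates: the inner-loop operands remain cheap doubles while the one variable that carries accumulated error gets the expensive high-precision representation.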
Pages: 285-295 (11 pages)