Accuracy, Cost, and Performance Tradeoffs for Floating-Point Accumulation

Cited by: 0
Authors
Nagar, Krishna K. [1 ]
Bakos, Jason D. [1 ]
Affiliations
[1] Univ S Carolina, Dept Comp Sci & Engn, Columbia, SC 29208 USA
Source
2013 INTERNATIONAL CONFERENCE ON RECONFIGURABLE COMPUTING AND FPGAS (RECONFIG) | 2013
Keywords
Computer arithmetic; Floating point accumulation; Rounding errors; Numerical accuracy; Compensated summation; FPGA; Precision
DOI
Not available
Chinese Library Classification (CLC)
TP301 [Theory and Methods];
Discipline Code
081202;
Abstract
Set-wise floating-point accumulation is a fundamental operation in scientific computing, but it presents design challenges such as the data hazard between the output and the input of a deeply pipelined floating-point adder and the numerical accuracy of the results. Streaming reduction architectures on FPGAs generally do not consider floating-point error, which can become a significant factor due to the dynamic nature of reduction architectures and the inherent roundoff error and non-associativity of floating-point addition. In this paper we present two frameworks, built on our existing reduction circuit architecture, that use compensated summation to improve the accuracy of the results. We find that both implementations produce exact results almost 50% of the time for most datasets, and their relative error is lower than that of the original reduction circuit. These designs require more than twice the resources and operate at a lower frequency than the original reduction circuit.
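For orientation, the sketch below shows classic Kahan compensated summation in software, which is the error-compensation idea the abstract refers to; it is not the authors' FPGA reduction circuit, and the function name, test data, and printed values are illustrative assumptions only. Compile without -ffast-math, which may optimize the compensation away.

    #include <stdio.h>
    #include <stddef.h>

    /* Kahan (compensated) summation: a running compensation term captures
     * the low-order bits that are lost each time an addend is folded into
     * the accumulator. Software illustration only, not the paper's
     * streaming reduction-circuit architecture. */
    static double kahan_sum(const double *x, size_t n)
    {
        double sum = 0.0;
        double c = 0.0;               /* running compensation for lost low-order bits */
        for (size_t i = 0; i < n; i++) {
            double y = x[i] - c;      /* apply the carried correction to the next addend */
            double t = sum + y;       /* low-order bits of y may be lost in this add */
            c = (t - sum) - y;        /* recover what was lost (algebraically zero) */
            sum = t;
        }
        return sum;
    }

    int main(void)
    {
        /* Illustrative data: one large value followed by many small ones.
         * Each 1e-16 is below half an ulp of 1.0, so naive accumulation
         * drops every one of them; the compensation term recovers them. */
        enum { N = 1000 };
        double x[N + 1];
        x[0] = 1.0;
        for (size_t i = 1; i <= N; i++)
            x[i] = 1e-16;

        double naive = 0.0;
        for (size_t i = 0; i <= N; i++)
            naive += x[i];

        printf("naive: %.17g\n", naive);               /* prints 1 */
        printf("kahan: %.17g\n", kahan_sum(x, N + 1)); /* approximately 1.0000000000001 */
        return 0;
    }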
Pages: 4