Design tradeoff analysis of floating-point adders in FPGAs

Cited: 7
Authors
Malik, Ali [1 ]
Chen, Dongdong [1 ]
Choi, Younhee [1 ]
Lee, Moon Ho [2 ]
Ko, Seok-Bum [1 ]
Affiliations
[1] Univ Saskatchewan, Dept Elect & Comp Engn, Saskatoon, SK S7N 5A9, Canada
[2] Chonbuk Natl Univ, Elect & Informat Engn Dept, Jeonju 561756, Jeonbuk, South Korea
Source
CANADIAN JOURNAL OF ELECTRICAL AND COMPUTER ENGINEERING-REVUE CANADIENNE DE GENIE ELECTRIQUE ET INFORMATIQUE | 2008, Vol. 33, No. 3-4
Keywords
floating-point adder; FPGA;
DOI
10.1109/CJECE.2008.4721634
CLC Number
TP3 [Computing technology; computer technology]
Discipline Code
0812
Abstract
With gate counts reaching ten million, field-programmable gate arrays (FPGAs) are becoming suitable for floating-point computations. Addition is the most complex operation in a floating-point unit; it can cause major delay while requiring significant area. Over the years, the VLSI community has developed many floating-point adder algorithms aimed primarily at reducing overall latency. An efficient floating-point adder design offers major area and performance improvements for FPGAs. Given recent advances in FPGA architecture and area density, latency has become the main focus in attempts to improve performance. This paper studies the implementation of standard, leading-one predictor (LOP), and far and close datapath (2-path) floating-point addition algorithms in FPGAs. Each algorithm has complex sub-operations that contribute significantly to the overall latency of the design. Each sub-operation is investigated for different implementations and is then synthesized onto a Xilinx Virtex-II Pro FPGA device. The standard and LOP algorithms are also pipelined into five stages and compared with the Xilinx IP. According to the results, the standard algorithm is the best implementation with respect to area, but it has a large overall latency of 27.059 ns while occupying 541 slices. The LOP algorithm reduces latency by 6.5% at the cost of a 38% increase in area compared to the standard algorithm. The 2-path implementation shows a 19% reduction in latency at an added expense of 88% in area compared to the standard algorithm. The five-stage standard pipelined implementation shows a 6.4% improvement in clock speed compared to the Xilinx IP with a 23% smaller area requirement. The five-stage pipelined LOP implementation shows a 22% improvement in clock speed compared to the Xilinx IP at a cost of 15% more area.
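For orientation, the "standard" single-path algorithm compared in this paper follows the textbook sequence of exponent comparison, significand alignment, addition, and normalization. The sketch below illustrates those steps for IEEE 754 single precision; it is a simplified illustration (positive normal operands only, truncation instead of round-to-nearest), not the paper's hardware implementation, and the function name is hypothetical.

```python
import struct

def f32_bits(x):
    """Reinterpret a Python float as its IEEE 754 single-precision bits."""
    return struct.unpack('>I', struct.pack('>f', x))[0]

def bits_f32(b):
    """Reinterpret a 32-bit pattern as an IEEE 754 single-precision float."""
    return struct.unpack('>f', struct.pack('>I', b))[0]

def fadd_standard(a, b):
    """Standard single-path float addition, simplified: positive normal
    operands only, and truncation in place of round-to-nearest, so the
    result is exact only when the true sum is representable."""
    ba, bb = f32_bits(a), f32_bits(b)
    ea, eb = (ba >> 23) & 0xFF, (bb >> 23) & 0xFF
    ma = (ba & 0x7FFFFF) | 0x800000   # restore the hidden leading 1
    mb = (bb & 0x7FFFFF) | 0x800000
    # Step 1: exponent comparison and significand alignment
    if ea < eb:
        ea, eb, ma, mb = eb, ea, mb, ma
    mb >>= (ea - eb)                  # align the smaller operand
    # Step 2: significand addition
    m = ma + mb
    e = ea
    # Step 3: normalization (like-signed addition can overflow by at
    # most one bit, so a single right shift suffices here; effective
    # subtraction would instead need leading-zero counting, which is
    # where the LOP algorithm saves latency)
    if m & 0x1000000:
        m >>= 1
        e += 1
    return bits_f32((e << 23) | (m & 0x7FFFFF))
```

For exactly representable sums, e.g. `fadd_standard(1.5, 2.25)`, the sketch reproduces the hardware result (3.75). The alignment shifter, wide significand adder, and normalization logic modeled above are the sub-operations whose FPGA mappings the paper compares across the standard, LOP, and 2-path designs.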
Pages: 169-175
Number of pages: 7