Analysis of Posit and Bfloat Arithmetic of Real Numbers for Machine Learning

Cited by: 8
Authors
Romanov, Aleksandr Yu [1 ]
Stempkovsky, Alexander L. [2 ]
Lariushkin, Ilia, V [3 ]
Novoselov, Georgy E. [4 ]
Solovyev, Roman A. [2 ]
Starykh, Vladimir A. [1 ]
Romanova, Irina I. [1 ]
Telpukhov, Dmitry, V [2 ]
Mkrtchan, Ilya A. [2 ]
Affiliations
[1] HSE Univ, Natl Res Univ Higher Sch Econ, Moscow 101000, Russia
[2] Russian Acad Sci, Inst Design Problems Microelect, Moscow 124681, Russia
[3] Adv Syst Technol, Moscow 124681, Russia
[4] NTProgress, Moscow 115280, Russia
Keywords
Standards; Machine learning; Hardware; Memory management; Libraries; Software; Machine learning algorithms; floating point; posit; IEEE 754; benchmark; PERFORMANCE
DOI
10.1109/ACCESS.2021.3086669
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Subject classification code
0812;
Abstract
Modern computational tasks often must not only guarantee a predefined accuracy but also deliver results quickly. Optimizing calculations that use floating point numbers, as opposed to integers, is a non-trivial task, so there is a need to explore new ways to improve such operations. This paper presents an analysis and comparison of several floating point formats: float, posit, and bfloat. Neural networks are one of the promising areas in which the problem of choosing such formats is most acute, which is why we pay special attention to linear algebra and artificial intelligence algorithms when assessing the efficiency of the new data types in this area. The results show that software implementations of posit16 and posit32 have high accuracy but are not particularly fast; on the other hand, bfloat16 differs little from float32 in accuracy but significantly surpasses it in performance on large amounts of data and complex machine learning algorithms. Thus, posit16 can be used in systems with less stringent performance requirements, in conditions of limited memory, and in cases where bfloat16 cannot provide the required accuracy. As for bfloat16, it can speed up systems based on the IEEE 754 standard, but it cannot solve all the problems of conventional floating point arithmetic. Although posits and bfloats are not a full-fledged replacement for float, they provide (under certain conditions) advantages that can be useful for the implementation of machine learning algorithms.
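As a minimal sketch of why bfloat16 stays close to float32 in accuracy while halving the memory footprint, bfloat16 rounding can be emulated in plain NumPy by keeping only the upper 16 bits of the float32 bit pattern (the function name below is illustrative; real hardware rounds to nearest even rather than truncating, so this is only an approximation):

```python
import numpy as np

def to_bfloat16(x):
    """Emulate bfloat16 by keeping only the upper 16 bits of the
    float32 bit pattern (sign, 8-bit exponent, top 7 mantissa bits).
    Hardware implementations round to nearest even; truncation is a
    close, slightly pessimistic approximation of that behaviour."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([np.pi, 1.0e-3, 1.0e30], dtype=np.float32)
bx = to_bfloat16(x)
print(bx[0])                # 3.140625 (float32 stores 3.1415927)
print(np.abs(x - bx) / x)   # relative error below ~8e-3: only about 3 decimal
                            # digits, but the full float32 exponent range is kept
```

The preserved 8-bit exponent is what lets bfloat16 stand in for float32 in many machine learning workloads, whereas posit formats instead trade a variable-length regime field for extra accuracy near 1.0, which matches the accuracy-versus-speed trade-off reported above.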
Pages: 82318-82324
Number of pages: 7