SOLE: Hardware-Software Co-design of Softmax and LayerNorm for Efficient Transformer Inference

Cited by: 13
Authors
Wang, Wenxun [1 ]
Zhou, Shuchang [2 ]
Sun, Wenyu [1 ]
Sun, Peiqin [2 ]
Liu, Yongpan [1 ]
Affiliations
[1] Tsinghua Univ, Dept Elect Engn, Beijing, Peoples R China
[2] MEGVII Technol, Beijing, Peoples R China
Source
2023 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER AIDED DESIGN, ICCAD | 2023
Keywords
Transformers; neural networks; hardware-software co-design; softmax; layer normalization;
DOI
10.1109/ICCAD57390.2023.10323725
CLC Classification Number
TP301 [Theory, Methods];
Discipline Classification Code
081202;
Abstract
Transformers have shown remarkable performance in both natural language processing (NLP) and computer vision (CV) tasks. However, their real-time inference speed and efficiency are limited by the inefficiency of Softmax and Layer Normalization (LayerNorm). Previous works based on function approximation suffer from inefficient implementations because they emphasize computation while disregarding memory overhead. Moreover, such methods rely on retraining to compensate for approximation error, which can be costly and inconvenient. In this paper, we present SOLE, a hardware-software co-design for Softmax and LayerNorm composed of E2Softmax and AILayerNorm. E2Softmax uses log2 quantization of the exponent function and log-based division to approximate Softmax, while AILayerNorm adopts low-precision statistics calculation. Compared with state-of-the-art designs, we achieve both low-precision calculation and low bit-width storage for Softmax and LayerNorm. Experiments show that SOLE maintains inference accuracy without retraining while offering orders-of-magnitude speedup and energy savings over GPU, achieving 3.04x and 3.86x energy-efficiency improvements and 2.82x and 3.32x area-efficiency improvements over prior state-of-the-art custom hardware for Softmax and LayerNorm, respectively.
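To make the E2Softmax idea in the abstract concrete (a base-2 exponent quantized in the log2 domain, with the row-sum division replaced by a subtraction of log2 values), the following is a rough NumPy sketch. The function name, fractional bit-width, and rounding scheme are illustrative assumptions rather than the paper's actual fixed-point datapath, and the AILayerNorm half of SOLE is not shown.

import numpy as np

def e2softmax_sketch(x, frac_bits=4):
    # Hedged sketch of an E2Softmax-style approximation; details are assumed, not the paper's design.
    # 1) Standard max subtraction keeps all exponents non-positive.
    x = x - x.max(axis=-1, keepdims=True)
    # 2) Work in the log2 domain: log2(exp(x)) = x * log2(e), quantized to frac_bits fractional bits.
    scale = 2 ** frac_bits
    log2_num = np.round(x * np.log2(np.e) * scale) / scale
    num = 2.0 ** log2_num  # low-precision stand-in for exp(x)
    # 3) Log-based division: dividing by the row sum becomes a subtraction of log2 values.
    log2_den = np.round(np.log2(num.sum(axis=-1, keepdims=True)) * scale) / scale
    return 2.0 ** (log2_num - log2_den)

With frac_bits around 4, the rounding step in the log2 domain perturbs each output by only a few percent relative to an exact softmax, which is the kind of accuracy/efficiency trade-off the abstract describes; the hardware realizes these steps with low bit-width fixed-point arithmetic rather than the floating-point operations used here.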
Pages: 9