Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System

Cited by: 1
Authors
Jang, Hongsun [1]
Song, Jaeyong [1]
Jung, Jaewon [1]
Park, Jaeyoung [2,4]
Kim, Youngsok [3]
Lee, Jinho [1]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Seoul, South Korea
[2] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
[3] Yonsei Univ, Dept Comp Sci, Seoul, South Korea
[4] Yonsei Univ, Seoul, South Korea
Source
2024 IEEE INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE COMPUTER ARCHITECTURE, HPCA 2024 | 2024
Funding
National Research Foundation of Singapore
Keywords
ARCHITECTURE
DOI
10.1109/HPCA57654.2024.00034
CLC Number
TP3 [Computing Technology, Computer Technology]
Discipline Code
0812
Abstract
The recent rapid advance of Large Language Models (LLMs) has been driven mainly by the increase in their number of parameters. This growth entails substantial memory capacity requirements, necessitating dozens of GPUs just to meet the capacity. One popular solution is storage-offloaded training, which uses host memory and storage as an extended memory hierarchy. However, this comes at the cost of a storage bandwidth bottleneck, because storage devices have orders-of-magnitude lower bandwidth than GPU device memories. Our work, Smart-Infinity, addresses the storage bandwidth bottleneck of storage-offloaded LLM training using near-storage processing devices on a real system. The main component of Smart-Infinity is SmartUpdate, which performs parameter updates on custom near-storage accelerators. We identify that moving parameter updates to the storage side removes most of the storage traffic. In addition, we propose an efficient data transfer handler structure to address the system-integration issues of Smart-Infinity. The handler overlaps data transfers while keeping memory consumption fixed by reusing the device buffer. Lastly, we propose accelerator-assisted gradient compression/decompression to enhance the scalability of Smart-Infinity. When scaling to multiple near-storage processing devices, write traffic on the shared channel becomes the bottleneck. To alleviate this, we compress gradients on the GPU and decompress them on the accelerators; the reduced traffic provides further acceleration. As a result, Smart-Infinity achieves a significant speedup over the baseline. Notably, Smart-Infinity is a ready-to-use approach fully integrated into PyTorch on a real system. The implementation of Smart-Infinity is available at https://github.com/AIS-SNU/smart-infinity.
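The abstract is concrete enough to illustrate two of its mechanisms. The PyTorch sketches below are illustrative reconstructions under stated assumptions, not the paper's implementation: the names (overlapped_writeback, write_fn, topk_compress, topk_decompress), the double-buffering scheme, the choice of top-k sparsification, and the 1% ratio are all hypothetical, and the real system runs the storage-side work on FPGA-based near-storage accelerators rather than on the host. First, a minimal sketch of a transfer handler that overlaps device-to-host copies with downstream storage writes while reusing a fixed pair of pinned buffers, assuming equally sized chunks:

```python
import torch

def overlapped_writeback(gpu_chunks, write_fn):
    """Drain equally sized GPU tensors to storage through two reusable
    pinned host buffers. The device-to-host copy of chunk i runs on a
    side stream while chunk i-1 is handed to write_fn (e.g., an SSD
    write), so transfers overlap writes and host memory stays fixed."""
    copy_stream = torch.cuda.Stream()
    staging = [None, None]                        # the two reused buffers
    done = [torch.cuda.Event(), torch.cuda.Event()]
    pending = []                                  # slots with in-flight copies
    for i, chunk in enumerate(gpu_chunks):
        slot = i % 2
        if staging[slot] is None:                 # allocate once, then reuse
            staging[slot] = torch.empty(chunk.shape, dtype=chunk.dtype,
                                        pin_memory=True)
        copy_stream.wait_stream(torch.cuda.current_stream())  # chunk is ready
        with torch.cuda.stream(copy_stream):
            staging[slot].copy_(chunk, non_blocking=True)
            done[slot].record()
        pending.append(slot)
        if len(pending) == 2:                     # oldest buffer is needed next:
            oldest = pending.pop(0)               # wait for its copy, write it
            done[oldest].synchronize()            # out, and free it for reuse
            write_fn(staging[oldest])
    for slot in pending:                          # flush the last in-flight chunk
        done[slot].synchronize()
        write_fn(staging[slot])
```

Second, one common gradient compressor that matches the abstract's division of labor, compressing on the GPU and decompressing near storage (modeled on the host here):

```python
def topk_compress(grad: torch.Tensor, ratio: float = 0.01):
    """GPU side: keep only the largest-magnitude fraction of entries,
    shrinking the write traffic that crosses the shared channel."""
    flat = grad.reshape(-1)
    k = max(1, int(flat.numel() * ratio))
    _, idx = torch.topk(flat.abs(), k)
    return flat[idx], idx, flat.numel()

def topk_decompress(values, idx, numel):
    """Accelerator side (modeled on the host here): scatter the sparse
    payload back into a dense gradient before the parameter update."""
    dense = torch.zeros(numel, dtype=values.dtype, device=values.device)
    dense[idx] = values
    return dense
```

With ratio=0.01 and 32-bit values plus 32-bit indices, each gradient write shrinks to roughly 2% of its dense size, illustrating how reduced write traffic on the shared channel can translate into the further speedup the abstract claims.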
Pages: 345-360
Page count: 16