Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System

Cited by: 1
Authors
Jang, Hongsun [1 ]
Song, Jaeyong [1 ]
Jung, Jaewon [1 ]
Park, Jaeyoung [2 ,4 ]
Kim, Youngsok [3 ]
Lee, Jinho [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Seoul, South Korea
[2] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
[3] Yonsei Univ, Dept Comp Sci, Seoul, South Korea
[4] Yonsei Univ, Seoul, South Korea
Source
2024 IEEE INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE COMPUTER ARCHITECTURE, HPCA 2024 | 2024
Funding
National Research Foundation of Singapore;
Keywords
ARCHITECTURE;
DOI
10.1109/HPCA57654.2024.00034
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Discipline code
0812;
Abstract
The recent huge advance of Large Language Models (LLMs) is mainly driven by the increase in the number of parameters. This has led to substantial memory capacity requirements, necessitating the use of dozens of GPUs just to meet the capacity. One popular solution to this is storage-offloaded training, which uses host memory and storage as an extended memory hierarchy. However, this comes at the cost of a storage bandwidth bottleneck, because storage devices have orders of magnitude lower bandwidth than GPU device memories. Our work, Smart-Infinity, addresses the storage bandwidth bottleneck of storage-offloaded LLM training using near-storage processing devices on a real system. The main component of Smart-Infinity is SmartUpdate, which performs parameter updates on custom near-storage accelerators. We identify that moving parameter updates to the storage side removes most of the storage traffic. In addition, we propose an efficient data transfer handler structure to address the system integration issues of Smart-Infinity. The handler allows data transfers to be overlapped while keeping memory consumption fixed by reusing the device buffer. Lastly, we propose accelerator-assisted gradient compression/decompression to enhance the scalability of Smart-Infinity. When scaling to multiple near-storage processing devices, the write traffic on the shared channel becomes the bottleneck. To alleviate this, we compress the gradients on the GPU and decompress them on the accelerators, which provides further acceleration from the reduced traffic. As a result, Smart-Infinity achieves a significant speedup compared to the baseline. Notably, Smart-Infinity is a ready-to-use approach that is fully integrated into PyTorch on a real system. The implementation of Smart-Infinity is available at https://github.com/AIS-SNU/smart-infinity.
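
The last part of the abstract (compressing gradients on the GPU and decompressing them on the near-storage accelerators) can be illustrated with a minimal sketch. The snippet below is not taken from the Smart-Infinity implementation; it only shows one plausible compression scheme (top-k sparsification, an assumption here) in plain PyTorch, and the function names are hypothetical. In the actual system, the decompression step would run on the near-storage accelerator rather than in Python.

import torch

def compress_topk(grad: torch.Tensor, ratio: float = 0.01):
    # Keep only the k largest-magnitude entries of the flattened gradient.
    flat = grad.reshape(-1)
    k = max(1, int(flat.numel() * ratio))
    _, indices = torch.topk(flat.abs(), k)
    return flat[indices], indices, flat.numel()

def decompress_topk(values, indices, numel):
    # Rebuild a dense gradient from the sparse (values, indices) pair.
    # On Smart-Infinity this step would be offloaded to the near-storage
    # accelerator; here it is plain PyTorch purely for illustration.
    dense = torch.zeros(numel, device=values.device, dtype=values.dtype)
    dense[indices] = values
    return dense

# Example: keep ~1% of the entries, shrinking what is written toward storage.
device = "cuda" if torch.cuda.is_available() else "cpu"
grad = torch.randn(1_000_000, device=device)
values, indices, numel = compress_topk(grad, ratio=0.01)
restored = decompress_topk(values, indices, numel)

Under this assumed scheme, only the surviving values and their indices would travel over the shared channel, which is how the reduced write traffic described in the abstract would translate into a bandwidth saving when scaling to multiple near-storage devices.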
Pages: 345-360
Page count: 16