GMLake: Efficient and Transparent GPU Memory Defragmentation for Large-scale DNN Training with Virtual Memory Stitching

Cited by: 1
Authors
Guo, Cong [1 ]
Zhang, Rui [2 ]
Xu, Jiale [1 ]
Leng, Jingwen [1 ]
Liu, Zihan [1 ]
Huang, Ziyu [1 ]
Guo, Minyi [1 ]
Wu, Hao [2 ]
Zhao, Shouren [2 ]
Zhao, Junping [2 ]
Zhang, Ke [2 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Shanghai Qi Zhi Inst, Shanghai, Peoples R China
[2] Ant Grp, Hangzhou, Peoples R China
Source
PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON ARCHITECTURAL SUPPORT FOR PROGRAMMING LANGUAGES AND OPERATING SYSTEMS, ASPLOS 2024, VOL 2 | 2024
Funding
National Key R&D Program of China; National Natural Science Foundation of China;
Keywords
Memory Defragmentation; GPU; Deep Learning; Virtual Memory Stitching;
DOI
10.1145/3620665.3640423
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Large-scale deep neural networks (DNNs), such as large language models (LLMs), have revolutionized the artificial intelligence (AI) field and become increasingly popular. However, training or fine-tuning such models requires substantial computational power and resources, and the memory capacity of a single acceleration device such as a GPU is one of the most important bottlenecks. Owing to the prohibitively large overhead (e.g., 10x) of the GPU's native memory allocator, DNN frameworks like PyTorch and TensorFlow adopt a caching allocator that maintains a memory pool with a splitting mechanism for fast memory (de)allocation. Unfortunately, the caching allocator's efficiency degrades quickly under popular memory-reduction techniques such as recomputation, offloading, distributed training, and low-rank adaptation. The primary reason is that these techniques introduce frequent and irregular memory (de)allocation requests, leading to severe fragmentation in the splitting-based caching allocator. To mitigate this fragmentation problem, we propose GPU memory lake (GMLake), a novel memory allocation framework built on low-level GPU virtual memory management. GMLake employs a novel virtual memory stitching (VMS) mechanism that fuses non-contiguous memory blocks through virtual memory address mapping. GMLake reduces GPU memory usage by 9.2 GB on average (up to 25 GB) and fragmentation by 15% (up to 33%) across eight LLM models on an A100 GPU with 80 GB of memory. GMLake is completely transparent to DNN models and memory-reduction techniques and ensures the seamless execution of resource-intensive deep-learning tasks. We have open-sourced GMLake at https://github.com/intelligent-machinelearning/glake/tree/main/GMLake.
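The virtual memory stitching idea described in the abstract can be illustrated with a small sketch built on the CUDA driver's virtual memory management API (cuMemCreate, cuMemAddressReserve, cuMemMap, cuMemSetAccess). This is not GMLake's implementation: the two-chunk scenario, chunk sizes, file name, and error-checking macro are illustrative assumptions; the actual system integrates this mapping with the framework's caching allocator. The sketch maps two separate physical allocations (standing in for two free fragments) into one contiguous virtual address range, so a request larger than either fragment can still be served.

// vms_sketch.cu -- minimal sketch of virtual memory stitching (VMS) with the
// CUDA driver's virtual memory management API. Illustrative only, not GMLake.
#include <cuda.h>
#include <cstdio>
#include <cstdlib>
#include <vector>

#define CHECK(call)                                                      \
  do {                                                                   \
    CUresult err_ = (call);                                              \
    if (err_ != CUDA_SUCCESS) {                                          \
      const char *msg = nullptr;                                         \
      cuGetErrorString(err_, &msg);                                      \
      fprintf(stderr, "%s failed: %s\n", #call, msg ? msg : "unknown");  \
      exit(1);                                                           \
    }                                                                    \
  } while (0)

int main() {
  CHECK(cuInit(0));
  CUdevice dev;
  CHECK(cuDeviceGet(&dev, 0));
  CUcontext ctx;
  CHECK(cuCtxCreate(&ctx, 0, dev));

  // Physical allocations must be a multiple of the allocation granularity.
  CUmemAllocationProp prop = {};
  prop.type = CU_MEM_ALLOCATION_TYPE_PINNED;
  prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
  prop.location.id = dev;
  size_t gran = 0;
  CHECK(cuMemGetAllocationGranularity(&gran, &prop,
                                      CU_MEM_ALLOC_GRANULARITY_MINIMUM));
  const size_t chunk = gran;       // one granule per physical chunk
  const size_t total = 2 * chunk;  // stitched size seen by the tensor

  // Two separate physical chunks (they stand in for two free fragments).
  CUmemGenericAllocationHandle h0, h1;
  CHECK(cuMemCreate(&h0, chunk, &prop, 0));
  CHECK(cuMemCreate(&h1, chunk, &prop, 0));

  // Reserve one contiguous virtual address range and map both chunks into it.
  CUdeviceptr va = 0;
  CHECK(cuMemAddressReserve(&va, total, 0, 0, 0));
  CHECK(cuMemMap(va,         chunk, 0, h0, 0));
  CHECK(cuMemMap(va + chunk, chunk, 0, h1, 0));

  // Grant read/write access to the whole stitched range.
  CUmemAccessDesc access = {};
  access.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
  access.location.id = dev;
  access.flags = CU_MEM_ACCESS_FLAGS_PROT_READWRITE;
  CHECK(cuMemSetAccess(va, total, &access, 1));

  // The two fragments now behave like one contiguous buffer.
  std::vector<char> host(total, 0x5A);
  CHECK(cuMemcpyHtoD(va, host.data(), total));
  printf("stitched %zu bytes at 0x%llx from 2 physical chunks\n",
         total, (unsigned long long)va);

  // Teardown: unmap, release physical handles, free the VA reservation.
  CHECK(cuMemUnmap(va, total));
  CHECK(cuMemRelease(h0));
  CHECK(cuMemRelease(h1));
  CHECK(cuMemAddressFree(va, total));
  CHECK(cuCtxDestroy(ctx));
  return 0;
}

Something along the lines of "nvcc -o vms_sketch vms_sketch.cu -lcuda" should build it on drivers and GPUs that expose the virtual memory management API (CUDA 10.2 or newer); the stitched range can then back a tensor without any physically contiguous free block of that size existing.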
Pages: 450-466
Page count: 17