ZeRO: Memory Optimizations Toward Training Trillion Parameter Models

Cited by: 289
Authors
Rajbhandari, Samyam
Rasley, Jeff
Ruwase, Olatunji
He, Yuxiong
Institutions
Source
PROCEEDINGS OF SC20: THE INTERNATIONAL CONFERENCE FOR HIGH PERFORMANCE COMPUTING, NETWORKING, STORAGE AND ANALYSIS (SC20) | 2020
Keywords
DOI
10.1109/SC41405.2020.00024
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Subject Classification Code
0812;
Abstract
Large deep learning models offer significant accuracy gains, but training billions to trillions of parameters is challenging. Existing solutions such as data and model parallelisms exhibit fundamental limitations to fit these models into limited device memory, while obtaining computation, communication and development efficiency. We develop a novel solution, Zero Redundancy Optimizer (ZeRO), to optimize memory, vastly improving training speed while increasing the model size that can be efficiently trained. ZeRO eliminates memory redundancies in data- and model-parallel training while retaining low communication volume and high computational granularity, allowing us to scale the model size proportional to the number of devices with sustained high efficiency. Our analysis on memory requirements and communication volume demonstrates: ZeRO has the potential to scale beyond 1 Trillion parameters using today's hardware. We implement and evaluate ZeRO: it trains large models of over 100B parameters with super-linear speedup on 400 GPUs, achieving throughput of 15 Petaflops. This represents an 8x increase in model size and 10x increase in achievable performance over state-of-the-art. In terms of usability, ZeRO can train large models of up to 13B parameters (e.g., larger than Megatron GPT 8.3B and T5 11B) without requiring model parallelism, which is harder for scientists to apply. Last but not least, researchers have used the system breakthroughs of ZeRO to create Turing-NLG, the world's largest language model at the time (17B parameters) with record-breaking accuracy.
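The memory savings the abstract refers to can be illustrated with the paper's model-state accounting for mixed-precision Adam training: roughly 2 bytes per parameter for fp16 weights, 2 bytes for fp16 gradients, and K = 12 bytes for optimizer states, with each ZeRO stage partitioning one more of these across the data-parallel devices. The Python sketch below is an illustrative calculation only, not code from the paper or from DeepSpeed; the function name, the stage numbering 0-3, and the 7.5B-parameter / 64-GPU example are chosen here to mirror the paper's analysis.

```python
# Back-of-the-envelope sketch (not the paper's released code) of per-GPU
# model-state memory under ZeRO's partitioning stages, assuming mixed-precision
# Adam: 2 B/param fp16 weights, 2 B/param fp16 gradients, K = 12 B/param
# optimizer states (fp32 copy, momentum, variance).

def zero_model_state_memory_gb(num_params: float, num_devices: int, stage: int) -> float:
    """Approximate model-state memory per device, in GB (hypothetical helper)."""
    p, g, k = 2.0, 2.0, 12.0  # bytes per parameter
    if stage == 0:            # plain data parallelism: everything replicated
        per_param = p + g + k
    elif stage == 1:          # ZeRO-1 (P_os): partition optimizer states
        per_param = p + g + k / num_devices
    elif stage == 2:          # ZeRO-2 (P_os+g): also partition gradients
        per_param = p + (g + k) / num_devices
    elif stage == 3:          # ZeRO-3 (P_os+g+p): also partition parameters
        per_param = (p + g + k) / num_devices
    else:
        raise ValueError("stage must be 0, 1, 2, or 3")
    return num_params * per_param / 1e9

if __name__ == "__main__":
    # Example: a 7.5B-parameter model on 64 GPUs.
    for stage in range(4):
        gb = zero_model_state_memory_gb(7.5e9, 64, stage)
        print(f"stage {stage}: ~{gb:.1f} GB of model states per GPU")
```

With these assumptions the sketch reproduces the scaling argument in the abstract: the replicated baseline needs about 120 GB of model states for a 7.5B-parameter model, while full partitioning across 64 devices drops that to under 2 GB per GPU, so the trainable model size grows roughly in proportion to the number of devices.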
Pages: 16