MoESys: A Distributed and Efficient Mixture-of-Experts Training and Inference System for Internet Services

Cited by: 3
Authors
Yu, Dianhai [1 ]
Shen, Liang [1 ]
Hao, Hongxiang [1 ]
Gong, Weibao [1 ]
Wu, Huachao [1 ]
Bian, Jiang [1 ]
Dai, Lirong [2 ]
Xiong, Haoyi [1 ]
Affiliations
[1] Baidu Inc, Beijing 100085, Peoples R China
[2] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, Hefei 230026, Anhui, Peoples R China
Keywords
Distributed inference; distributed training; large models for internet services; MoE
DOI
10.1109/TSC.2024.3399654
CLC number
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
While modern internet services such as chatbots, search engines, and online advertising demand large-scale deep neural networks (DNNs), distributed training and inference over heterogeneous computing systems are desired to facilitate these DNN models. Mixture-of-Experts (MoE) is one of the most common strategies to lower the cost of training, subject to the overall size of models/data, through gating and parallelism in a divide-and-conquer fashion. While DeepSpeed (Rasley et al., 2020) has made efforts to carry out large-scale MoE training over heterogeneous infrastructures, the efficiency of training and inference could be further improved from several system aspects, including load balancing, communication/computation efficiency, and memory footprint limits. In this work, we present MoESys, a novel system that boosts efficiency in both large-scale training and inference. Specifically, in the training procedure, MoESys adopts an elastic MoE training strategy with 2D prefetch and fused communication over hierarchical storage, so as to enjoy efficient parallelisms. For scalable inference on a single node, especially when the model size is larger than GPU memory, MoESys builds the CPU and GPU memory jointly into a ring of sections to load the model, and executes the computation tasks across the memory sections in a round-robin manner for efficient inference. We carried out extensive experiments to evaluate MoESys, in which it successfully trained a Unified Feature Optimization (UFO; Zhang et al., 2021) model with a Sparsely-Gated Mixture-of-Experts layer of 12B parameters in 8 days on 48 A100 GPU cards. The comparison against the state-of-the-art shows that MoESys outperformed DeepSpeed, with 33% higher throughput (tokens per second) in training and 13% higher throughput in inference overall. In particular, under unbalanced MoE tasks such as UFO, MoESys achieved 64% higher throughput with an 18% lower memory footprint.
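The abstract's ring-of-sections inference idea — a model too large for GPU memory is split into sections, with only a few resident on the device at a time while execution rotates through them — can be illustrated with a minimal, framework-free sketch. The class name `RingSectionExecutor`, the `device_slots` parameter, and the toy sections below are hypothetical illustrations, not the paper's actual API:

```python
from collections import deque

class RingSectionExecutor:
    """Toy sketch of round-robin execution over model sections whose
    total size exceeds device memory. All sections live in "host"
    storage; at most `device_slots` are "resident" on the device."""

    def __init__(self, sections, device_slots=2):
        self.sections = sections                    # ordered model shards (callables)
        self.resident = deque(maxlen=device_slots)  # indices currently "on device"

    def _ensure_resident(self, idx):
        # Loading a new section into the full ring evicts the oldest one,
        # because deque(maxlen=...) discards from the opposite end.
        if idx not in self.resident:
            self.resident.append(idx)

    def forward(self, x):
        # Walk the sections in order, swapping each one in round-robin.
        for idx, fn in enumerate(self.sections):
            self._ensure_resident(idx)
            x = fn(x)
        return x

# Usage: three "sections" with device room for only two at once.
model = RingSectionExecutor(
    [lambda v: v + 1, lambda v: v * 2, lambda v: v - 3],
    device_slots=2,
)
print(model.forward(5))  # (5 + 1) * 2 - 3 = 9
```

In the real system the "load" step would copy parameters between CPU and GPU memory and overlap with computation; the sketch only captures the scheduling shape.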
Pages: 2626-2639
Number of pages: 14
References (60 in total)
[1] Aharoni R, 2019, 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2019), Vol. 1, p. 3874
[2] Al-Doghman F, Moustafa N, Khalil I, Sohrabi N, Tari Z, Zomaya A Y. AI-Enabled Secure Microservices in Edge Computing: Opportunities and Challenges. IEEE Transactions on Services Computing, 2023, 16(2): 1485-1504
[3] Artetxe M, 2022, arXiv:2112.10684
[4] Bian J, Huang J, Ji S, Liao Y, Li X, Wang Q, Zhou J, Dou D, Wang Y, Xiong H. Feynman: Federated Learning-Based Advertising for Ecosystems-Oriented Mobile Apps Recommendation. IEEE Transactions on Services Computing, 2023, 16(5): 3361-3372
[5] Bian J, Xiong H, Wang Z, Zhou J, Ji S, Chen H, Zhang D, Dou D. AFCS: Aggregation-Free Spatial-Temporal Mobile Community Sensing. IEEE Transactions on Mobile Computing, 2023, 22(9): 5017-5034
[6] Brown T B, et al., 2020, Advances in Neural Information Processing Systems, Vol. 33
[7] Caruana R. Multitask Learning. Machine Learning, 1997, 28(1): 41-75
[8] Chen X, Zhao M, Yang X, Li Z, Liu Y, Li Z, Liu Y. The Cask Effect of Multi-source Content Delivery: Measurement and Mitigation. 2019 39th IEEE International Conference on Distributed Computing Systems (ICDCS 2019), 2019: 261-270
[9] Cui Z, Xu X, Xue F, Cai X, Cao Y, Zhang W, Chen J. Personalized Recommendation System Based on Collaborative Filtering for IoT Scenarios. IEEE Transactions on Services Computing, 2020, 13(4): 685-695
[10] Dai Z H, 2019, arXiv:1901.02860