An In-Network Architecture for Accelerating Shared-Memory Multiprocessor Collectives

Cited by: 54
Authors
Klenk, Benjamin [1 ]
Jiang, Nan [1 ]
Thorson, Greg [1 ]
Dennison, Larry [1 ]
Affiliations
[1] NVIDIA, Santa Clara, CA 95051 USA
Source
2020 ACM/IEEE 47TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA 2020) | 2020
Keywords
PERFORMANCE;
DOI
10.1109/ISCA45697.2020.00085
Chinese Library Classification (CLC)
TP3 [Computing technology; computer technology]
Discipline code
0812
Abstract
The slowdown of single-chip performance scaling, combined with the growing demand to compute ever larger problems efficiently, has led to renewed interest in distributed architectures and specialized hardware. Dedicated accelerators for common or critical operations are becoming cost-effective additions to processors, peripherals, and networks. In this paper we focus on one such operation, the All-Reduce, which is both a common and a critical feature of neural network training. All-Reduce is impossible to fully parallelize and difficult to amortize, so it benefits greatly from hardware acceleration. We propose an accelerator-centric, shared-memory network that improves All-Reduce performance through in-network reductions and also accelerates other collectives such as Multicast. We propose switch designs that support in-network computation, including two reduction methods that trade off implementation complexity against performance. Additionally, we propose network endpoint modifications to further improve collectives. We present simulation results for a 16-GPU system showing that our collective acceleration design improves the All-Reduce operation by up to 2x for large messages and up to 18x for small messages compared with a state-of-the-art software algorithm, leading to up to 1.4x faster DL training times for networks like Transformer. We demonstrate that this design scales to large systems and present results for up to 128 GPUs.
Pages: 996-1009
Page count: 14
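The abstract above centers on the All-Reduce collective that the proposed switches reduce in-network. As a point of reference only, the following is a minimal NumPy sketch of what a software All-Reduce computes using a ring schedule (reduce-scatter followed by all-gather), representative of the class of software baselines such designs are compared against; the function name and the 4-rank example are illustrative and not taken from the paper.

import numpy as np

def ring_all_reduce(inputs):
    """Simulate a ring All-Reduce: every rank ends up holding the
    element-wise sum of all ranks' input vectors."""
    n = len(inputs)
    # Each rank splits its buffer into n chunks, one per ring position.
    chunks = [np.array_split(np.asarray(x, dtype=np.float64), n) for x in inputs]

    # Reduce-scatter: after n-1 steps, rank r holds the fully reduced
    # chunk (r + 1) % n.
    for step in range(n - 1):
        for r in range(n):
            src = (r - 1) % n                # ring neighbour sending to rank r
            idx = (r - step - 1) % n         # chunk received this step
            chunks[r][idx] = chunks[r][idx] + chunks[src][idx]

    # All-gather: circulate the fully reduced chunks around the ring.
    for step in range(n - 1):
        for r in range(n):
            src = (r - 1) % n
            idx = (r - step) % n
            chunks[r][idx] = chunks[src][idx].copy()

    return [np.concatenate(c) for c in chunks]

if __name__ == "__main__":
    # Four "GPUs", each contributing an 8-element gradient vector.
    ranks = [np.arange(8.0) + 10 * r for r in range(4)]
    results = ring_all_reduce(ranks)
    expected = sum(ranks)                    # element-wise sum of all inputs
    assert all(np.allclose(out, expected) for out in results)
    print(results[0])

The paper's contribution is to move the summation step out of software like the above and into the network switches, so reductions happen as packets traverse the fabric rather than in repeated endpoint-to-endpoint exchanges.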