TPU v4: An Optically Reconfigurable Supercomputer for Machine Learning with Hardware Support for Embeddings

Cited by: 76
Authors
Jouppi, Norman P. [1 ]
Kurian, George [1 ]
Li, Sheng [1 ]
Ma, Peter [1 ]
Nagarajan, Rahul [1 ]
Nai, Lifeng [1 ]
Patil, Nishant [1 ]
Subramanian, Suvinay [1 ]
Swing, Andy [1 ]
Towles, Brian [1 ]
Young, Cliff [1 ]
Zhou, Xiang [1 ]
Zhou, Zongwei [1 ]
Patterson, David [1 ]
Affiliations
[1] Google, Mountain View, CA 94043 USA
Source
Proceedings of the 50th Annual International Symposium on Computer Architecture (ISCA 2023), 2023
Keywords
Machine learning; domain specific architecture; TPU; GPU; IPU; supercomputer; optical interconnect; reconfigurable; embeddings; large language model; power usage effectiveness; warehouse scale computer; carbon emissions; energy; CO2 equivalent emissions;
DOI
10.1145/3579371.3589350
CLC number: TP3 [Computing technology, computer technology]
Discipline code: 0812
Abstract
In response to innovations in machine learning (ML) models, production workloads changed radically and rapidly. TPU v4 is the fifth Google domain specific architecture (DSA) and its third supercomputer for such ML models. Optical circuit switches (OCSes) dynamically reconfigure its interconnect topology to improve scale, availability, utilization, modularity, deployment, security, power, and performance; users can pick a twisted 3D torus topology if desired. Much cheaper, lower power, and faster than InfiniBand, OCSes and underlying optical components are <5% of system cost and <3% of system power. Each TPU v4 includes SparseCores, dataflow processors that accelerate models that rely on embeddings by 5x-7x yet use only 5% of die area and power. Deployed since 2020, TPU v4 outperforms TPU v3 by 2.1x and improves performance/Watt by 2.7x. The TPU v4 supercomputer is 4x larger at 4096 chips and thus nearly 10x faster overall, which along with OCS flexibility and availability allows a large language model to train at an average of ~60% of peak FLOPS/second. For similar sized systems, it is ~4.3x-4.5x faster than the Graphcore IPU Bow and is 1.2x-1.7x faster and uses 1.3x-1.9x less power than the Nvidia A100. TPU v4s inside the energy-optimized warehouse scale computers of Google Cloud use ~2-6x less energy and produce ~20x less CO2e than contemporary DSAs in typical on-premise data centers.
Pages: 1147-1160
Page count: 14
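The abstract notes that users can select a twisted 3D torus topology. As a purely illustrative sketch of what "twisting" means (this is not Google's implementation; the function name, the twist convention of shifting z wrap-around links in y, and the 4x4x4 size are assumptions for illustration), a twisted torus can be modeled by applying an offset whenever a link crosses the wrap-around boundary:

```python
def twisted_torus_neighbors(x, y, z, n=4, twist=1):
    """Return the 6 neighbors of node (x, y, z) in an n x n x n torus
    whose z wrap-around links are shifted ("twisted") by `twist`
    positions in the y dimension. Illustrative only."""
    # Regular torus links in x and y
    nbrs = [((x + 1) % n, y, z), ((x - 1) % n, y, z),
            (x, (y + 1) % n, z), (x, (y - 1) % n, z)]
    # +z link: crossing the z boundary applies the twist in y
    if z + 1 < n:
        nbrs.append((x, y, z + 1))
    else:
        nbrs.append((x, (y + twist) % n, 0))
    # -z link: undo the twist when crossing the boundary the other way
    if z - 1 >= 0:
        nbrs.append((x, y, z - 1))
    else:
        nbrs.append((x, (y - twist) % n, n - 1))
    return nbrs
```

Shifting the wrap-around links this way is what reduces the average hop count relative to a plain torus of the same size; the choice of which dimension carries the twist is a design parameter.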