An efficient segmented quantization for graph neural networks

Cited by: 0
Authors
Yue Dai
Xulong Tang
Youtao Zhang
Affiliations
[1] University of Pittsburgh, Department of Computer Science
Source
CCF Transactions on High Performance Computing | 2022, Vol. 4
Keywords
Graph neural network; Quantization; Accelerator
DOI
Not available
Abstract
Graph Neural Networks (GNNs) are recently developed machine learning approaches that exploit advances in neural networks for a wide range of graph applications. While GNNs achieve promising inference accuracy improvements over conventional approaches, their efficiency suffers from expensive computation and intensive memory access in the feature aggregation and combination phases, leading to large inference latency. Recent studies proposed mixed-precision feature quantization to address the memory access overhead. However, its linear approximation and computation complexity become the main constraints on overall GNN accuracy and performance. In this paper, we propose segmented quantization, which partitions the feature range into segments, customizes the linear approximation within each segment based on the original value density, and conducts efficient mixed-precision computation between quantized features and full-precision weights. Segmented quantization helps achieve high inference accuracy while maintaining low computation complexity. We also devise a hardware accelerator to fully exploit the benefits of segmented quantization. Our experiments show that up to 5% average accuracy improvement and up to 6.8× performance improvement can be achieved over state-of-the-art GNN accelerators.
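The abstract describes segmented quantization only at a high level; the following is a minimal NumPy sketch of the general idea, not the paper's actual algorithm or hardware mapping. It assumes quantile-based segment boundaries (so denser value regions get narrower segments and finer linear approximation), a per-segment scale and offset for low-bit uniform quantization, and a dequantize-then-multiply step against full-precision weights. The function names `segmented_quantize` and `mixed_precision_matmul`, the segment count, and the 4-bit width are illustrative assumptions.

```python
import numpy as np

def segmented_quantize(features, num_segments=4, bits=4):
    """Quantize a feature matrix segment by segment.

    The feature range is split into num_segments intervals taken from value
    quantiles, so regions with higher value density get narrower intervals.
    Each segment keeps its own scale/offset for low-bit uniform quantization.
    """
    flat = features.ravel()
    # Segment boundaries follow the empirical value density (quantiles).
    edges = np.quantile(flat, np.linspace(0.0, 1.0, num_segments + 1))
    edges[-1] = np.nextafter(edges[-1], np.inf)  # make the top edge inclusive
    seg_id = np.clip(np.searchsorted(edges, features, side="right") - 1,
                     0, num_segments - 1)

    levels = 2 ** bits - 1
    scales = np.empty(num_segments)
    offsets = np.empty(num_segments)
    q = np.empty_like(features, dtype=np.uint8)
    for s in range(num_segments):
        lo, hi = edges[s], edges[s + 1]
        scale = (hi - lo) / levels if hi > lo else 1.0
        scales[s], offsets[s] = scale, lo
        mask = seg_id == s
        q[mask] = np.round((features[mask] - lo) / scale).astype(np.uint8)
    return q, seg_id, scales, offsets

def mixed_precision_matmul(q, seg_id, scales, offsets, weights):
    """Dequantize features per segment, then multiply by full-precision weights."""
    deq = q.astype(np.float32) * scales[seg_id] + offsets[seg_id]
    return deq @ weights

# Illustrative usage with random node features and layer weights.
x = np.random.randn(128, 64).astype(np.float32)
w = np.random.randn(64, 32).astype(np.float32)
q, seg, sc, off = segmented_quantize(x)
y = mixed_precision_matmul(q, seg, sc, off, w)
```

Quantile-based boundaries are just one way to realize the "value density" criterion mentioned in the abstract; the paper's segmentation policy, bitwidths, and accelerator datapath may differ.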
Pages: 461–473
Page count: 12