RASH: Reliable Deep Learning Acceleration using Sparsity-based Hardware

Cited by: 0
Authors
Kundu, Shamik [1 ]
Raha, Arnab [2 ]
Mathaikutty, Deepak A. [2 ]
Basu, Kanad [1 ]
Affiliations
[1] Univ Texas Dallas, Dept Elect & Comp Engn, Richardson, TX 75083 USA
[2] Intel Corp, Adv Architecture Res, NPU IP, CGAI CCG, Santa Clara, CA 95054 USA
Source
2024 25TH INTERNATIONAL SYMPOSIUM ON QUALITY ELECTRONIC DESIGN, ISQED 2024 | 2024
Keywords
Sparsity; Deep Learning Accelerator; Reliability;
DOI
10.1109/ISQED60706.2024.10528741
CLC number
TP3 [computing technology, computer technology];
Discipline code
0812 ;
Abstract
In this paper, we present a novel sparsity-based acceleration logic that leverages both-sided fine-grained sparsity in the activation and weight tensors to skip ineffectual computations, thereby implementing an efficient convolution engine in a hardware accelerator. However, as demonstrated in this paper, circuit-level faults manifested in the sparsity logic can result in graceless degradation in classification accuracy, as well as control failure in the sparse DNN accelerator in mission mode. To circumvent this, we propose RASH, a Reliable deep learning Acceleration framework using Sparsity-based Hardware, which enables in-field detection of faults manifested in the sparsity logic of the DNN accelerator.
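The core idea the abstract describes, skipping multiply-accumulates whose activation or weight operand is zero, can be illustrated with a minimal sketch. This is not the paper's hardware design: the function name `sparse_dot` and the bitmap representation are illustrative assumptions, standing in for the combined sparsity-encoding logic a real accelerator would implement in hardware.

```python
# Hedged sketch (illustrative, not the paper's implementation) of
# both-sided fine-grained sparsity skipping: nonzero bitmaps of the
# activation and weight vectors are combined with a logical AND, and a
# MAC is issued only where BOTH operands are nonzero, so ineffectual
# (zero-operand) multiplications are skipped.

def sparse_dot(activations, weights):
    assert len(activations) == len(weights)
    # Per-element nonzero bitmaps, as a sparsity encoder would produce.
    act_map = [a != 0 for a in activations]
    wgt_map = [w != 0 for w in weights]
    acc = 0   # accumulator
    macs = 0  # count of MACs actually performed
    for i in range(len(activations)):
        if act_map[i] and wgt_map[i]:  # combined bitmap: both nonzero
            acc += activations[i] * weights[i]
            macs += 1
    return acc, macs

# Example: only one of six positions has both operands nonzero,
# so a single MAC is performed instead of six.
result, macs = sparse_dot([0, 3, 0, 5, 0, 2], [4, 0, 1, 2, 0, 0])
# result == 10, macs == 1
```

A fault in the combined-bitmap logic (e.g. a stuck bit in `act_map` or `wgt_map`) would silently drop or inject MAC terms, which is the failure mode RASH is designed to detect in the field.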
Pages: 1
Related papers
3 records
  • [1] Kundu, 2023, Patent No. 18/304,713
  • [2] A Hardware-Software Blueprint for Flexible Deep Learning Specialization
    Moreau, Thierry
    Chen, Tianqi
    Vega, Luis
    Roesch, Jared
    Yan, Eddie
    Zheng, Lianmin
    Fromm, Josh
    Jiang, Ziheng
    Ceze, Luis
    Guestrin, Carlos
    Krishnamurthy, Arvind
    [J]. IEEE MICRO, 2019, 39 (05) : 8 - 16
  • [3] SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks
    Parashar, Angshuman
    Rhu, Minsoo
    Mukkara, Anurag
    Puglielli, Antonio
    Venkatesan, Rangharajan
    Khailany, Brucek
    Emer, Joel
    Keckler, Stephen W.
    Dally, William J.
    [J]. 44TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA 2017), 2017, : 27 - 40