AdaPT: Fast Emulation of Approximate DNN Accelerators in PyTorch

Cited by: 11
Authors
Danopoulos, Dimitrios [1 ]
Zervakis, Georgios [2 ]
Siozios, Kostas [3 ]
Soudris, Dimitrios [1 ]
Henkel, Joerg [2 ]
Affiliations
[1] Natl Tech Univ Athens, Sch Elect & Comp Engn, Athens 15780, Greece
[2] Karlsruhe Inst Technol, Chair Embedded Syst, D-76131 Karlsruhe, Germany
[3] Aristotle Univ Thessaloniki, Dept Phys, Thessaloniki 54124, Greece
Keywords
Accelerator; approximate computing; deep neural network (DNN); PyTorch; quantization; design
DOI
10.1109/TCAD.2022.3212645
CLC Number
TP3 [computing technology; computer technology]
Discipline Code
0812
Abstract
The current state of the art employs approximate multipliers to address the drastically increased power demands of deep neural network (DNN) accelerators. However, evaluating the accuracy of approximate DNNs is cumbersome due to the lack of adequate support for approximate arithmetic in DNN frameworks. We address this inefficiency by presenting AdaPT, a fast emulation framework that extends PyTorch to support approximate inference as well as approximation-aware retraining. AdaPT can be seamlessly deployed and is compatible with most DNNs. We evaluate the framework on several DNN models and application fields, including CNNs, LSTMs, and GANs, for a number of approximate multipliers with distinct bitwidth values. The results show substantial error recovery from approximate retraining and inference time reduced by up to 53.9x with respect to the baseline approximate implementation.
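The abstract describes emulating approximate multipliers inside a tensor framework. A common way such emulators work is to precompute the approximate multiplier's output for every quantized operand pair into a lookup table, then vectorize matrix products through that table. The sketch below illustrates this principle only; `approx_mul` (a product-LSB-truncation toy approximation), `LUT`, and `approx_matmul` are illustrative names and are not AdaPT's actual multipliers or API. It uses NumPy rather than PyTorch to stay self-contained.

```python
import numpy as np

# Toy approximate multiplier: truncate the least-significant bit of the
# exact product. This is NOT one of AdaPT's multipliers, just a stand-in
# with a small, bounded error per multiplication.
def approx_mul(a: int, b: int) -> int:
    return (a * b) & ~0x1

# Precompute the multiplier over all 4-bit unsigned operand pairs.
# Emulation frameworks use such tables so that approximate arithmetic
# can be applied with vectorized indexing instead of per-scalar calls.
LUT = np.array([[approx_mul(a, b) for b in range(16)] for a in range(16)],
               dtype=np.int64)

def approx_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Matrix multiply where each scalar product comes from the LUT.

    A: (m, k) and B: (k, n), entries in [0, 15].
    """
    # Broadcasted fancy indexing: prods[i, j, l] = LUT[A[i, j], B[j, l]]
    prods = LUT[A[:, :, None], B[None, :, :]]   # shape (m, k, n)
    return prods.sum(axis=1)                    # accumulate exactly

A = np.array([[3, 5], [7, 2]])
B = np.array([[4, 6], [1, 9]])
print(approx_matmul(A, B))   # approximate result
print(A @ B)                 # exact reference
```

Because the approximation only drops the product's LSB, each output element deviates from the exact matmul by at most 1 per accumulated product, which is the kind of bounded error that approximation-aware retraining can then recover.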
Pages: 2074 - 2078
Number of pages: 5
Related Papers
(50 in total)
  • [21] Control Variate Approximation for DNN Accelerators
    Zervakis, Georgios
    Spantidi, Ourania
    Anagnostopoulos, Iraklis
    Amrouch, Hussam
    Henkel, Joerg
    2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 481 - 486
  • [22] Performance Characterization of DNN Training using TensorFlow and PyTorch on Modern Clusters
    Jain, Arpan
    Awan, Ammar Ahmad
    Anthony, Quentin
    Subramoni, Hari
    Panda, Dhabaleswar K.
    2019 IEEE INTERNATIONAL CONFERENCE ON CLUSTER COMPUTING (CLUSTER), 2019, : 58 - 68
  • [23] Fast emulation of density functional theory simulations using approximate Gaussian processes
    Stetzler, Steven
    Grosskopf, Michael
    Lawrence, Earl
    APPLICATIONS OF MACHINE LEARNING 2022, 2022, 12227
  • [24] High-Throughput Approximate Multiplication Models in PyTorch
    Trommer, Elias
    Waschneck, Bernd
    Kumar, Akash
    2023 26TH INTERNATIONAL SYMPOSIUM ON DESIGN AND DIAGNOSTICS OF ELECTRONIC CIRCUITS AND SYSTEMS, DDECS, 2023, : 79 - 82
  • [25] Soft errors in DNN accelerators: A comprehensive review
    Ibrahim, Younis
    Wang, Haibin
    Liu, Junyang
    Wei, Jinghe
    Chen, Li
    Rech, Paolo
    Adam, Khalid
    Guo, Gang
    MICROELECTRONICS RELIABILITY, 2020, 115 (115)
  • [26] Targeting DNN Inference Via Efficient Utilization of Heterogeneous Precision DNN Accelerators
    Spantidi, Ourania
    Zervakis, Georgios
    Alsalamin, Sami
    Roman-Ballesteros, Isai
    Henkel, Joerg
    Amrouch, Hussam
    Anagnostopoulos, Iraklis
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2023, 11 (01) : 112 - 125
  • [27] Increasing Throughput of In-Memory DNN Accelerators by Flexible Layerwise DNN Approximation
    De la Parra, Cecilia
    Soliman, Taha
    Guntoro, Andre
    Kumar, Akash
    Wehn, Norbert
    IEEE MICRO, 2022, 42 (06) : 17 - 24
  • [28] Energy Efficient Computing with Heterogeneous DNN Accelerators
    Hossain, Md Shazzad
    Savidis, Ioannis
    2021 IEEE 3RD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS), 2021,
  • [29] PYNQ-Torch: a framework to develop PyTorch accelerators on the PYNQ platform
    Vohra, Manohar
    Fasciani, Stefano
    2019 IEEE 19TH INTERNATIONAL SYMPOSIUM ON SIGNAL PROCESSING AND INFORMATION TECHNOLOGY (ISSPIT 2019), 2019,
  • [30] Flexion: A Quantitative Metric for Flexibility in DNN Accelerators
    Kwon, Hyoukjun
    Pellauer, Michael
    Parashar, Angshuman
    Krishna, Tushar
    IEEE COMPUTER ARCHITECTURE LETTERS, 2021, 20 (01) : 1 - 4