Efficient Hardware Approximation for Bit-Decomposition Based Deep Neural Network Accelerators

Times Cited: 0
Authors
Soliman, Taha [1 ]
Eldebiky, Amro [2 ]
De La Parra, Cecilia [1 ]
Guntoro, Andre [1 ]
Wehn, Norbert [3 ]
Affiliations
[1] Robert Bosch GmbH, Renningen, Germany
[2] Tech Univ Munich, Munich, Germany
[3] TU Kaiserslautern, Kaiserslautern, Germany
Keywords
Approximate Computing; In-memory Computing; Convolutional Neural Networks; Bit-serial Accelerators
DOI
10.1109/SOCC56010.2022.9908072
CLC Number
TP3 (Computing Technology, Computer Technology)
Subject Classification Code
0812
Abstract
Recently, an increasing number of embedded devices for neural network acceleration have adopted hardware approximations, as they enable an efficient trade-off between computational resources, power consumption, and network accuracy. In this paper, we present novel approximation techniques for architectures based on bit-decomposition of the multiply-and-accumulate (MAC) operations, especially in-memory architectures. For the target architecture, we propose seven different approximations with a high degree of freedom. We explore the design space of these approximations and evaluate their effect on architecture area and power, as well as on network accuracy loss. Our approximation techniques achieve up to 26.5% power and 20% area reduction at only 1% accuracy loss.
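The record does not detail the seven approximations themselves, but the underlying idea of a bit-decomposed MAC can be sketched. The following is a minimal, hypothetical illustration (function name and the `skip_low_planes` knob are invented for this sketch, not taken from the paper): activations are split into bit-planes, each plane contributes a partial product weighted by its significance, and dropping low-significance planes is one simple way such architectures trade accuracy for power and area.

```python
import numpy as np

def bit_decomposed_mac(weights, activations, n_bits=8, skip_low_planes=0):
    """Illustrative bit-serial MAC over unsigned activations.

    The dot product is computed plane by plane: plane b holds bit b of
    every activation, and its partial product is scaled by 2**b.
    skip_low_planes > 0 drops the least-significant planes, a toy
    stand-in for the kind of approximation such accelerators apply.
    """
    acc = 0
    for b in range(skip_low_planes, n_bits):
        # Extract bit-plane b of every activation (each entry is 0 or 1).
        plane = (activations >> b) & 1
        # Accumulate this plane's partial product at its significance.
        acc += int((weights * plane).sum()) << b
    return acc

w = np.array([3, -2, 5])
a = np.array([7, 1, 12])
exact = bit_decomposed_mac(w, a)                      # full precision
approx = bit_decomposed_mac(w, a, skip_low_planes=1)  # drop bit-plane 0
```

With `skip_low_planes=0` the result matches an ordinary dot product; skipping planes removes accumulation cycles at the cost of a bounded error, which is the trade-off the paper's design-space exploration quantifies.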
Pages: 77-82 (6 pages)
Related Papers (50 records)
  • [1] Adaptable Approximation Based on Bit Decomposition for Deep Neural Network Accelerators
    Soliman, Taha
    De la Parra, Cecilia
    Guntoro, Andre
    Wehn, Norbert
    2021 IEEE 3RD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS), 2021,
  • [2] An overview memristor based hardware accelerators for deep neural network
    Gokgoz, Baki
    Gul, Fatih
    Aydin, Tolga
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2024, 36 (09):
  • [3] Hardware Approximate Techniques for Deep Neural Network Accelerators: A Survey
    Armeniakos, Giorgos
    Zervakis, Georgios
    Soudris, Dimitrios
    Henkel, Joerg
    ACM COMPUTING SURVEYS, 2023, 55 (04)
  • [4] Surrogate Model based Co-Optimization of Deep Neural Network Hardware Accelerators
    Woehrle, Hendrik
    Alvarez, Mariela De Lucas
    Schlenke, Fabian
    Walsemann, Alexander
    Karagounis, Michael
    Kirchner, Frank
    2021 IEEE INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS (MWSCAS), 2021, : 40 - 45
  • [5] Practically Efficient Secure Distributed Exponentiation Without Bit-Decomposition
    Aly, Abdelrahaman
    Abidin, Aysajan
    Nikova, Svetla
    FINANCIAL CRYPTOGRAPHY AND DATA SECURITY, FC 2018, 2018, 10957 : 291 - 309
  • [6] Efficient Bit-Decomposition and Modulus-Conversion Protocols with an Honest Majority
    Kikuchi, Ryo
    Ikarashi, Dai
    Matsuda, Takahiro
    Hamada, Koki
    Chida, Koji
    INFORMATION SECURITY AND PRIVACY, 2018, 10946 : 64 - 82
  • [7] An Overview of Energy-Efficient Hardware Accelerators for On-Device Deep-Neural-Network Training
    Lee, Jinsu
    Yoo, Hoi-Jun
    2021, Institute of Electrical and Electronics Engineers Inc. (01): 115 - 128
  • [8] Joint Protection Scheme for Deep Neural Network Hardware Accelerators and Models
    Zhou, Jingbo
    Zhang, Xinmiao
    IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2023, 42 (12) : 4518 - 4527
  • [9] Exploration and Generation of Efficient FPGA-based Deep Neural Network Accelerators
    Ali, Nermine
    Philippe, Jean-Marc
    Tain, Benoit
    Coussy, Philippe
    2021 IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS (SIPS 2021), 2021, : 123 - 128
  • [10] Efficient Compression Technique for NoC-based Deep Neural Network Accelerators
    Lorandel, Jordane
    Lahdhiri, Habiba
    Bourdel, Emmanuelle
    Monteleone, Salvatore
    Palesi, Maurizio
    2020 23RD EUROMICRO CONFERENCE ON DIGITAL SYSTEM DESIGN (DSD 2020), 2020, : 174 - 179