Energy-Efficient and Quality-Assured Approximate Computing Framework Using a Co-Training Method

Cited by: 1
Authors
Jiang, Li [1 ]
Song, Zhuoran [1 ]
Song, Haiyue [1 ]
Xu, Chengwen [1 ]
Xu, Qiang [2 ]
Jing, Naifeng [1 ]
Zhang, Weifeng [3 ]
Liang, Xiaoyao [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, 800 Dongchuan Rd, Shanghai 200240, Peoples R China
[2] Chinese Univ Hong Kong, Shatin, Hong Kong, Peoples R China
[3] Alibaba Grp, 969 Wenyi West Rd, Hangzhou, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Approximate computing; error control;
DOI
10.1145/3342239
Chinese Library Classification (CLC)
TP3 [computing technology, computer technology];
Discipline Code
0812;
Abstract
Approximate computing is a promising design paradigm that introduces a new dimension, error, into the original design space. By allowing inexact computation in error-tolerant applications, approximate computing can gain both performance and energy efficiency. A neural network (NN) is in theory a universal approximator and possesses a high degree of parallelism; emerging deep neural network accelerators deployed with an NN-based approximator are therefore promising candidates for approximate computing. Nevertheless, the approximation result must satisfy the users' quality requirement, which varies across applications. An NN-based classifier is normally deployed to ensure the approximation quality: only the inputs predicted to meet the quality requirement are executed by the approximator. The potential of these two NNs, however, has not been fully explored; involving two NNs in approximate computing raises critical optimization questions, such as how to reconcile the two NNs' distinct views of the input data space, how to train the two correlated NNs, and what their topologies should be. In this article, we propose a novel NN-based approximate computing framework with quality assurance. We advocate a co-training approach that trains the classifier and the approximator alternately to maximize the agreement of the two NNs on the input space. In each iteration, we coordinate the training of the two NNs with a judicious selection of training data. Next, we explore different selection policies and propose to select training data from multiple iterations, which increases the invocation rate of the approximate accelerator. In addition, we optimize the classifier with a dynamic threshold-tuning algorithm to further improve the invocation rate. The increased invocation of the accelerator yields higher energy efficiency under the same quality requirement. We also propose two efficient algorithms to find the smallest topologies of the NN-based approximator and the classifier that meet the quality requirement. The first searches for the minimum topology directly with a greedy strategy but incurs excessive training overhead. To address this, the second gradually grows the topology of the NNs to match the quality requirement by transferring the learned parameters. Experimental results show significant improvements in quality and energy efficiency over existing NN-based approximate computing frameworks.
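The abstract describes the co-training loop only at a high level. Below is a minimal sketch in PyTorch of how such an alternating scheme could look; it is an illustrative assumption, not the authors' implementation. The network shapes, the error metric, the fixed acceptance threshold `tau`, and the schedule (`rounds`, `epochs`) are all placeholders.

```python
import torch
import torch.nn as nn

class Approximator(nn.Module):
    # Small MLP that mimics the exact function on inputs deemed "safe".
    def __init__(self, d_in, d_out, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                                 nn.Linear(hidden, d_out))

    def forward(self, x):
        return self.net(x)

class Classifier(nn.Module):
    # Predicts whether the approximator's output will meet the error bound.
    def __init__(self, d_in, hidden=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def co_train(approx, clf, x, y_exact, err_bound, rounds=5, epochs=50, tau=0.5):
    """Alternately train the two NNs to maximize their agreement.

    Each round: (1) fit the approximator on the inputs the classifier
    currently accepts; (2) relabel every input by whether the approximator
    now meets the error bound; (3) fit the classifier to those labels.
    """
    opt_a = torch.optim.Adam(approx.parameters(), lr=1e-3)
    opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
    accept = torch.ones(len(x), dtype=torch.bool)  # round 0: use all inputs
    for _ in range(rounds):
        for _ in range(epochs):  # (1) train approximator on accepted subset
            opt_a.zero_grad()
            nn.functional.mse_loss(approx(x[accept]), y_exact[accept]).backward()
            opt_a.step()
        with torch.no_grad():    # (2) label inputs by the achieved error
            err = (approx(x) - y_exact).abs().mean(dim=1)
            labels = (err <= err_bound).float().unsqueeze(1)
        for _ in range(epochs):  # (3) train classifier on those labels
            opt_c.zero_grad()
            nn.functional.binary_cross_entropy(clf(x), labels).backward()
            opt_c.step()
        with torch.no_grad():    # classifier selects next round's training data
            accept = (clf(x) >= tau).squeeze(1)
        if not accept.any():     # degenerate case: nothing accepted
            break
    return approx, clf
```

Note that `tau` is held fixed here; the dynamic threshold tuning mentioned in the abstract would instead adjust the classifier's acceptance threshold at run time, relaxing it whenever the measured quality stays within the requirement so that more inputs are routed to the approximate accelerator.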
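Similarly, a sketch of the second topology-search algorithm: grow the approximator's hidden layer step by step, transferring the learned parameters so each step resumes from the previous solution instead of retraining from scratch. The function-preserving widening rule used here (copy the old weights, zero the new units' outgoing weights) is one plausible transfer scheme and is assumed, not taken from the paper; `widen_mlp`, `grow_until_quality`, and all sizes are illustrative.

```python
import torch
import torch.nn as nn

def widen_mlp(old, extra):
    """Add `extra` hidden units to a Linear-ReLU-Linear MLP, transferring
    the learned parameters so training resumes from the old solution."""
    fc1, fc2 = old[0], old[2]
    h = fc1.out_features
    new_fc1 = nn.Linear(fc1.in_features, h + extra)  # new rows keep random init
    new_fc2 = nn.Linear(h + extra, fc2.out_features)
    with torch.no_grad():
        new_fc1.weight[:h] = fc1.weight     # copy learned first-layer weights
        new_fc1.bias[:h] = fc1.bias
        new_fc2.weight[:, :h] = fc2.weight  # copy learned second-layer weights
        new_fc2.weight[:, h:] = 0.0         # zero outgoing weights: the widened
        new_fc2.bias.copy_(fc2.bias)        # net starts as the same function,
    return nn.Sequential(new_fc1, nn.ReLU(), new_fc2)  # yet new units can train

def grow_until_quality(d_in, d_out, train_fn, quality_fn, target,
                       h0=4, step=4, h_max=64):
    """Grow the approximator until it meets the quality target.

    `train_fn(net)` fine-tunes the network in place; `quality_fn(net)`
    returns the achieved quality (higher is better). Both are supplied by
    the caller, since the loss and quality metric are application-specific.
    """
    net = nn.Sequential(nn.Linear(d_in, h0), nn.ReLU(), nn.Linear(h0, d_out))
    while True:
        train_fn(net)
        if quality_fn(net) >= target:
            return net                      # smallest topology that suffices
        if net[0].out_features + step > h_max:
            raise RuntimeError("quality target unreachable within h_max")
        net = widen_mlp(net, step)          # grow and keep learned parameters
```

Zeroing only the outgoing weights (rather than both sides) keeps the widened network exactly equivalent to the old one while still letting gradients reach the new units during fine-tuning.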
Pages: 25
Related Papers
50 records in total
  • [21] SparkXD: A Framework for Resilient and Energy-Efficient Spiking Neural Network Inference using Approximate DRAM
    Putra, Rachmad Vidya Wicaksana
    Hanif, Muhammad Abdullah
    Shafique, Muhammad
    2021 58TH ACM/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2021, : 379 - 384
  • [22] An Energy-Efficient Bayesian Neural Network Implementation Using Stochastic Computing Method
    Jia, Xiaotao
    Gu, Huiyi
    Liu, Yuhao
    Yang, Jianlei
    Wang, Xueyan
    Pan, Weitao
    Zhang, Youguang
    Cotofana, Sorin
    Zhao, Weisheng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (09) : 12913 - 12923
  • [23] Energy Efficient SVM Classifier Using Approximate Computing
    Zhou, Yangcan
    Lin, Jun
    Wang, Zhongfeng
    2017 IEEE 12TH INTERNATIONAL CONFERENCE ON ASIC (ASICON), 2017, : 1045 - 1048
  • [24] An Energy-efficient Reconfigurable Hybrid DNN Architecture for Speech Recognition with Approximate Computing
    Liu, Bo
    Guo, Shisheng
    Qin, Hai
    Gong, Yu
    Yang, Jinjiang
    Ge, Wei
    Yang, Jun
    2018 IEEE 23RD INTERNATIONAL CONFERENCE ON DIGITAL SIGNAL PROCESSING (DSP), 2018,
  • [25] E-ERA: An energy-efficient reconfigurable architecture for RNNs using dynamically adaptive approximate computing
    Liu, Bo
    Dong, Wei
    Xu, Tingting
    Gong, Yu
    Ge, Wei
    Yang, Jinjiang
    Shi, Longxing
    IEICE ELECTRONICS EXPRESS, 2017, 14 (15):
  • [26] Novel XNOR-based Approximate Computing for Energy-efficient Image Processors
    Kim, Sunghyun
    Kim, Youngmin
    JOURNAL OF SEMICONDUCTOR TECHNOLOGY AND SCIENCE, 2018, 18 (05) : 602 - 608
  • [27] Energy-efficient Accelerator Architecture for Stereo Image Matching using Approximate Computing and Statistical Error Compensation
    Kim, Eric P.
    Shanbhag, Naresh R.
    2014 IEEE GLOBAL CONFERENCE ON SIGNAL AND INFORMATION PROCESSING (GLOBALSIP), 2014, : 55 - 59
  • [28] Energy-Efficient Reconfigurable Computing Using Spintronic Memory
    Karam, Robert
    Yang, Kai
    Bhunia, Swarup
    2015 IEEE 58TH INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS (MWSCAS), 2015,
  • [29] Energy-Efficient Bayesian Inference Using Bitstream Computing
    Khoram, Soroosh
    Daruwalla, Kyle
    Lipasti, Mikko
    IEEE COMPUTER ARCHITECTURE LETTERS, 2023, 22 (01) : 37 - 40
  • [30] Computing Functional Gains for Designing More Energy-Efficient Buildings Using a Model Reduction Framework
    Akhtar, Imran
    Borggaard, Jeff
    Burns, John
    FLUIDS, 2018, 3 (04):