Energy-Efficient and Quality-Assured Approximate Computing Framework Using a Co-Training Method

Cited by: 1
Authors
Jiang, Li [1 ]
Song, Zhuoran [1 ]
Song, Haiyue [1 ]
Xu, Chengwen [1 ]
Xu, Qiang [2 ]
Jing, Naifeng [1 ]
Zhang, Weifeng [3 ]
Liang, Xiaoyao [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, 800 Dongchuan Rd, Shanghai 200240, Peoples R China
[2] Chinese Univ Hong Kong, Shatin, Hong Kong, Peoples R China
[3] Alibaba Grp, 969 Wenyi West Rd, Hangzhou, Zhejiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Approximate computing; error control;
DOI
10.1145/3342239
Chinese Library Classification
TP3 [Computing Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
Approximate computing is a promising design paradigm that introduces a new dimension, error, into the original design space. By allowing inexact computation in error-tolerant applications, approximate computing can gain both performance and energy efficiency. A neural network (NN) is a universal approximator in theory and possesses a high level of parallelism. Emerging deep neural network accelerators deployed with an NN-based approximator are thereby promising candidates for approximate computing. Nevertheless, the approximation result must satisfy the user's quality requirement, and that requirement varies across applications. We normally deploy an NN-based classifier to ensure the approximation quality: only the inputs predicted to meet the quality requirement are executed by the approximator. The potential of these two NNs, however, is not fully explored; involving two NNs in approximate computing raises critical optimization questions, such as the two NNs' distinct views of the input data space, how to train the two correlated NNs, and what their topologies should be. In this article, we propose a novel NN-based approximate computing framework with quality assurance. We advocate a co-training approach that trains the classifier and the approximator alternately to maximize the agreement of the two NNs on the input space. In each iteration, we coordinate the training of the two NNs with a judicious selection of training data. Next, we explore different selection policies and propose to select training data from multiple iterations, which can enhance the invocation of the approximate accelerator. In addition, we optimize the classifier by integrating a dynamic threshold-tuning algorithm to further improve the invocation of the approximate accelerator. The increased invocation of the accelerator leads to higher energy efficiency under the same quality requirement. We also propose two efficient algorithms to find the smallest topologies of the NN-based approximator and classifier that achieve the quality requirement. The first algorithm straightforwardly searches for the minimum topology using a greedy strategy, but it incurs too much training overhead. To solve this issue, the second algorithm gradually grows the topology of the NNs to match the quality requirement by transferring the learned parameters. Experimental results show significant improvements in quality and energy efficiency compared to existing NN-based approximate computing frameworks.
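The abstract describes an alternating training loop (approximator and classifier trained in turn to maximize agreement), training data accumulated across iterations, and a dynamic decision-threshold adjustment that raises accelerator invocation under a quality bound. The sketch below illustrates that loop structure only; it is not the authors' implementation. All concrete choices are my assumptions: scikit-learn MLPs stand in for the NN approximator and classifier, a toy 2-D kernel stands in for the approximated function, and absolute error against a fixed bound stands in for the application's quality metric.

```python
# A minimal co-training sketch under the assumptions stated above,
# not the paper's implementation.
import numpy as np
from sklearn.neural_network import MLPClassifier, MLPRegressor

rng = np.random.default_rng(0)

def exact_kernel(x):
    """Toy error-tolerant kernel that the approximator tries to mimic."""
    return np.sin(3.0 * x[:, 0]) + 0.5 * x[:, 1] ** 2

X = rng.uniform(-1.0, 1.0, size=(4000, 2))
y = exact_kernel(X)

approximator = MLPRegressor(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
classifier = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500, random_state=0)

# Initial approximator fit. The quality bound here is chosen only so that the
# toy example has a mix of approximable and non-approximable inputs; in the
# framework it would come from the user's quality requirement.
approximator.fit(X, y)
quality_bound = float(np.quantile(np.abs(approximator.predict(X) - y), 0.7))

pool_X, pool_y = X, y  # training data accumulated across iterations
for it in range(4):
    # (1) Label each input by whether the current approximator meets the bound.
    safe = (np.abs(approximator.predict(X) - y) <= quality_bound).astype(int)

    # (2) Train the classifier to predict that label, i.e. to agree with the approximator.
    if len(np.unique(safe)) == 2:  # guard against degenerate toy cases
        classifier.fit(X, safe)

    # (3) Select training data: inputs the classifier accepts, accumulated over
    #     iterations (a simple stand-in for the multi-iteration selection policy).
    accepted = classifier.predict(X) == 1
    pool_X = np.vstack([pool_X, X[accepted]])
    pool_y = np.concatenate([pool_y, y[accepted]])

    # (4) Retrain the approximator, now biased toward the accepted region.
    approximator.fit(pool_X, pool_y)

# Dynamic threshold tuning (illustrative): pick the classifier decision threshold
# that maximizes invocation while observed violations stay under a small budget.
proba = classifier.predict_proba(X)[:, 1]
best_thr, best_rate = 0.5, 0.0
for thr in np.linspace(0.95, 0.05, 19):
    invoked = proba >= thr
    if not invoked.any():
        continue
    errs = np.abs(approximator.predict(X[invoked]) - y[invoked])
    if np.mean(errs > quality_bound) <= 0.05 and invoked.mean() > best_rate:
        best_thr, best_rate = thr, invoked.mean()

print(f"decision threshold {best_thr:.2f} -> invocation rate {best_rate:.1%}")
```

A higher invocation rate at the tuned threshold corresponds to more inputs served by the approximate accelerator, which is the source of the energy savings the abstract refers to; the 5% violation budget and topology sizes above are placeholders, not values from the paper.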
Pages: 25