Energy-Efficient and Quality-Assured Approximate Computing Framework Using a Co-Training Method

Cited by: 1
Authors
Jiang, Li [1 ]
Song, Zhuoran [1 ]
Song, Haiyue [1 ]
Xu, Chengwen [1 ]
Xu, Qiang [2 ]
Jing, Naifeng [1 ]
Zhang, Weifeng [3 ]
Liang, Xiaoyao [1 ]
Affiliations
[1] Shanghai Jiao Tong Univ, 800 Dongchuan Rd, Shanghai 200240, Peoples R China
[2] Chinese Univ Hong Kong, Shatin, Hong Kong, Peoples R China
[3] Alibaba Grp, 969 Wenyi West Rd, Hangzhou, Zhejiang, Peoples R China
Fund
National Natural Science Foundation of China
Keywords
Approximate computing; error control
DOI
10.1145/3342239
CLC classification
TP3 (computing technology, computer technology)
Discipline code
0812
Abstract
Approximate computing is a promising design paradigm that introduces a new dimension, error, into the original design space. By allowing inexact computation in error-tolerant applications, approximate computing can gain both performance and energy efficiency. A neural network (NN) is in theory a universal approximator and possesses a high degree of parallelism. Emerging deep neural network accelerators deployed with an NN-based approximator are thereby promising candidates for approximate computing. Nevertheless, the approximation result must satisfy the user's quality requirement, which varies across applications. We normally deploy an NN-based classifier to ensure the approximation quality: only the inputs predicted to meet the quality requirement are executed by the approximator. The potential of these two NNs, however, has not been fully explored; involving two NNs in approximate computing raises critical optimization questions, such as how to reconcile the two NNs' distinct views of the input data space, how to train the two correlated NNs, and how to choose their topologies. In this article, we propose a novel NN-based approximate computing framework with quality assurance. We advocate a co-training approach that trains the classifier and the approximator alternately to maximize the agreement of the two NNs on the input space. In each iteration, we coordinate the training of the two NNs with a judicious selection of training data. Next, we explore different selection policies and propose to select training data from multiple iterations, which increases the invocation of the approximate accelerator. In addition, we optimize the classifier with a dynamic threshold-tuning algorithm to raise the invocation of the approximate accelerator further. The increased invocation of the accelerator leads to higher energy efficiency under the same quality requirement.
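The co-training loop and dynamic threshold tuning described in the abstract can be sketched as follows. This is a toy illustration only: simple polynomial regressors stand in for the approximator and classifier NNs, and all helper names, the error bound, and the threshold-update schedule are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, 2000)
y = np.sin(3.0 * X) + 0.5 * (X > 0.8)  # target function, hard to fit near the edge
ERR_BOUND = 0.1                        # illustrative per-application quality requirement

# Degree-5 polynomial least squares: a toy stand-in for training an NN.
def fit(x, t):
    return np.polyfit(x, t, 5)

w_app = fit(X, y)          # approximator trained on all inputs initially
tau = ERR_BOUND            # classifier acceptance threshold (dynamically tuned)

for it in range(5):
    err = np.abs(np.polyval(w_app, X) - y)
    # "Classifier": a model predicting the approximator's error on each input.
    w_clf = fit(X, err)
    accept = np.polyval(w_clf, X) <= tau
    # Co-training: retrain the approximator only on inputs the classifier
    # accepts, so the two models converge to agree on the input space.
    if accept.sum() > 50:
        w_app = fit(X[accept], y[accept])
    real = np.abs(np.polyval(w_app, X) - y)
    # Dynamic threshold tuning: loosen tau while quality still holds
    # (raising invocation); tighten it when the bound is violated.
    if accept.any() and real[accept].mean() <= ERR_BOUND:
        tau *= 1.1
    else:
        tau *= 0.9

invocation = accept.mean()                                  # fraction sent to the accelerator
quality = real[accept].mean() if accept.any() else np.inf   # mean error on accepted inputs
print(f"invocation={invocation:.2f}, mean error on accepted={quality:.3f}")
```

The key design point mirrored here is that the classifier gates which inputs reach the approximator, while the threshold trades invocation rate against measured quality under the fixed error bound.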
We propose two efficient algorithms to find the smallest topologies of the NN-based approximator and the classifier that achieve the quality requirement. The first searches for the minimum topology directly using a greedy strategy but incurs a large training overhead. To address this issue, the second gradually grows the topology of the NNs to match the quality requirement by transferring the learned parameters. Experimental results show significant improvements in quality and energy efficiency over existing NN-based approximate computing frameworks.
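The parameter-transfer step of the second topology-search algorithm can be illustrated with a minimal sketch: when a hidden layer is widened, the old weights are copied and the new units' output weights start at zero, so the grown network computes exactly the same function and training resumes from there rather than from scratch. The function names and the zero-initialization choice are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(X, W1, b1, W2):
    """One-hidden-layer ReLU MLP (no output bias, for brevity)."""
    return np.maximum(X @ W1 + b1, 0) @ W2

def grow_hidden(W1, b1, W2, extra, rng):
    """Widen the hidden layer by `extra` units while preserving the learned
    function: existing weights are transferred, new units get small random
    input weights but zero output weights, so they contribute nothing until
    further training shapes them."""
    n_in = W1.shape[0]
    W1g = np.hstack([W1, rng.normal(0.0, 0.1, (n_in, extra))])
    b1g = np.concatenate([b1, np.zeros(extra)])
    W2g = np.vstack([W2, np.zeros((extra, W2.shape[1]))])
    return W1g, b1g, W2g

# Demonstration: growing the topology leaves the network's output unchanged.
X = rng.normal(size=(8, 3))
W1 = rng.normal(size=(3, 4)); b1 = rng.normal(size=4); W2 = rng.normal(size=(4, 2))
W1g, b1g, W2g = grow_hidden(W1, b1, W2, extra=4, rng=rng)
preserved = np.allclose(forward(X, W1, b1, W2), forward(X, W1g, b1g, W2g))
print("function preserved after growth:", preserved)
```

Because each growth step starts from an already-trained smaller network, the search avoids the repeated from-scratch training that makes the greedy strategy expensive.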
Pages: 25