A Converting Autoencoder Toward Low-latency and Energy-efficient DNN Inference at the Edge

Cited by: 1
Authors
Mahmud, Hasanul [1 ]
Kang, Peng [1 ]
Desai, Kevin [1 ]
Lama, Palden [1 ]
Prasad, Sushil K. [1 ]
Affiliations
[1] Univ Texas San Antonio, Dept Comp Sci, San Antonio, TX 78249 USA
Source
2024 IEEE INTERNATIONAL PARALLEL AND DISTRIBUTED PROCESSING SYMPOSIUM WORKSHOPS, IPDPSW 2024 | 2024
Keywords
Energy-efficiency; Deep Neural Networks; Edge Computing; Early-exit DNNs; Converting Autoencoder;
DOI
10.1109/IPDPSW63119.2024.00117
CLC Number
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Reducing inference time and energy usage while maintaining prediction accuracy has become a significant concern for deep neural network (DNN) inference on resource-constrained edge devices. To address this problem, we propose a novel approach based on a "converting" autoencoder and lightweight DNNs. This improves upon recent work such as early-exit frameworks and DNN partitioning. Early-exit frameworks spend different amounts of computational effort on different inputs depending on their complexity; however, they can be inefficient in real-world scenarios that involve many hard image samples. On the other hand, DNN partitioning algorithms that utilize the computation power of both the cloud and edge devices can be affected by network delays and intermittent connections between the cloud and the edge. We present CBNet, a low-latency and energy-efficient DNN inference framework tailored for edge devices. It utilizes a "converting" autoencoder to efficiently transform hard images into easy ones, which are subsequently processed by a lightweight DNN for inference. To the best of our knowledge, such an autoencoder has not been proposed earlier. Our experimental results using three popular image-classification datasets on a Raspberry Pi 4, a Google Cloud instance, and an instance with an Nvidia Tesla K80 GPU show that CBNet achieves up to a 4.8x speedup in inference latency and a 79% reduction in energy usage compared to competing techniques, while maintaining similar or higher accuracy.
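To illustrate the two-stage pipeline the abstract describes, the following is a minimal PyTorch-style sketch of CBNet-style inference: a "converting" autoencoder first maps a hard input image to an easy one, and a lightweight classifier then predicts from that easy image. All module names (ConvertingAutoencoder, LightweightCNN, cbnet_infer), layer sizes, and the MNIST-sized input are illustrative assumptions; the paper's actual architectures and training procedure are not reproduced here.

import torch
import torch.nn as nn

class ConvertingAutoencoder(nn.Module):
    """Hypothetical converting autoencoder: maps a 'hard' image to an 'easy' one."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 14x14 -> 7x7
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class LightweightCNN(nn.Module):
    """Hypothetical lightweight classifier that handles the converted 'easy' images."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 28x28 -> 14x14
        )
        self.classifier = nn.Linear(8 * 14 * 14, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def cbnet_infer(image, autoencoder, classifier):
    # Every input is first converted to an easy image and then classified by
    # the lightweight DNN, so no cloud offloading or per-input exit branch is needed.
    with torch.no_grad():
        easy = autoencoder(image)
        logits = classifier(easy)
    return logits.argmax(dim=1)

if __name__ == "__main__":
    x = torch.rand(1, 1, 28, 28)  # e.g. an MNIST-sized grayscale image
    print(cbnet_infer(x, ConvertingAutoencoder().eval(), LightweightCNN().eval()))

In this sketch, both models run entirely on the edge device, which is the design choice that avoids the network-delay sensitivity of cloud-edge partitioning noted in the abstract.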
Pages: 592-599
Page count: 8