Automated deep-learning model optimization framework for microcontrollers

Cited: 0
Authors
Hong, Seungtae [1 ,2 ]
Park, Gunju [1 ]
Kim, Jeong-Si [1 ]
Affiliations
[1] Elect & Telecommun Res Inst, Artificial Intelligence Comp Res Lab, Daejeon, South Korea
[2] Univ Sci & Technol, Daejeon, South Korea
Keywords
automated framework; deep learning; memory efficiency; microcontrollers; model optimization;
DOI
10.4218/etrij.2023-0522
Chinese Library Classification (CLC)
TM [Electrical technology]; TN [Electronics and communication technology];
Subject classification code
0808; 0809;
Abstract
This paper introduces a framework for optimizing deep-learning models on microcontrollers (MCUs), which is crucial in today's expanding embedded device market. We focus on model optimization techniques, particularly pruning and quantization, to enhance the performance of neural networks within the limited resources of MCUs. Our approach combines automatic iterative optimization and code generation, simplifying MCU model deployment without requiring extensive hardware knowledge. In experiments with architectures such as ResNet-8 and MobileNet v2, our framework substantially reduces model size and increases inference speed, both of which are crucial for MCU efficiency. Compared with TensorFlow Lite for MCUs, our optimizations for MobileNet v2 reduce static random-access memory use by 51%-57% and flash use by 17%-62%, while increasing inference speed by approximately 1.55 times. These advancements highlight the impact of our method on performance and memory efficiency, demonstrating its value in embedded artificial intelligence and its broad applicability to MCU-based neural network optimization.
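To illustrate the two optimization techniques the abstract names, the sketch below applies magnitude-based pruning followed by symmetric 8-bit quantization to a toy weight tensor. This is a hypothetical, self-contained illustration of the general techniques only; the function names, the sparsity level, and the per-tensor symmetric scheme are assumptions, not the paper's actual implementation.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization: q = round(w / scale)."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

w = [0.02, -0.85, 0.40, -0.01, 0.63, 0.05]
pruned = prune_by_magnitude(w, sparsity=0.5)   # half the weights become 0.0
q, scale = quantize_int8(pruned)               # int8 codes plus one float scale
print(pruned, q, scale)
```

Pruning shrinks the effective model (zeros can be stored or skipped cheaply), and quantization cuts each remaining weight from 4 bytes to 1, which is the kind of SRAM/flash saving the abstract reports; real deployments typically fine-tune after pruning and calibrate quantization ranges on data.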
Pages: 179-192 (14 pages)