Deploying Machine Learning Models to Ahead-of-Time Runtime on Edge Using MicroTVM

Cited by: 0
Authors
Liu, Chen [1 ]
Jobst, Matthias [1 ]
Guo, Liyuan [1 ]
Shi, Xinyue [1 ]
Partzsch, Johannes [1 ]
Mayr, Christian [1 ]
Affiliations
[1] Tech Univ Dresden, Dresden, Germany
Source
Proceedings of the 2023 IEEE/ACM International Workshop on Compilers, Deployment, and Tooling for Edge AI (CODAI 2023), 2023
Keywords
TVM; MicroTVM; model deployment; BYOC; UMA;
DOI
10.1145/3615338.3618125
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In the past few years, more and more AI applications have been deployed on edge devices. However, models trained by data scientists with machine learning frameworks such as PyTorch or TensorFlow cannot be seamlessly executed on the edge. In this paper, we develop an end-to-end code generator that parses a pre-trained model into C source libraries for the backend using MicroTVM, an extension of the TVM machine learning compiler framework that targets inference on bare-metal devices. An analysis shows that specific compute-intensive operators can be easily offloaded to a dedicated accelerator via the Universal Modular Accelerator (UMA) interface, while the remaining operators are processed on the CPU cores. Using the automatically generated ahead-of-time C runtime, we conduct a hand gesture recognition experiment on an ARM Cortex-M4F core.
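A minimal sketch of the kind of MicroTVM ahead-of-time compilation flow the abstract describes, not the authors' exact setup: the model file name, input name and shape, and a TVM build with microTVM (USE_MICRO) enabled are assumptions, and the UMA accelerator registration step is omitted.

# Minimal sketch, assuming a TVM build with microTVM enabled and an ONNX
# model "gesture.onnx"; input name/shape below are placeholders.
import onnx
import tvm
from tvm import relay, micro
from tvm.relay.backend import Executor, Runtime

onnx_model = onnx.load("gesture.onnx")  # hypothetical pre-trained model file
mod, params = relay.frontend.from_onnx(
    onnx_model, shape={"input": (1, 1, 32, 32)}  # assumed input name/shape
)

# Plain C code generation for an ARM Cortex-M4 class core (no vectorization).
target = tvm.target.Target("c -keys=arm_cpu -mcpu=cortex-m4")

with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}):
    module = relay.build(
        mod,
        target=target,
        params=params,
        executor=Executor("aot", {"interface-api": "c", "unpacked-api": True}),
        runtime=Runtime("crt"),  # standalone C runtime for bare-metal targets
    )

# Package the generated C sources in Model Library Format for firmware integration.
micro.export_model_library_format(module, "gesture_mlf.tar")

In the flow described in the paper, compute-intensive operators would additionally be partitioned to the accelerator through a registered UMA backend before building; that step is left out of this sketch.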
Pages: 37-40
Page count: 4