MTL-Split: Multi-Task Learning for Edge Devices using Split Computing

Times Cited: 0
Authors
Capogrosso, Luigi [1 ]
Fraccaroli, Enrico [1 ,2 ]
Chakraborty, Samarjit [2 ]
Fummi, Franco [1 ]
Cristani, Marco [1 ]
Affiliations
[1] Univ Verona, Dept Engn Innovat Med, Verona, Italy
[2] Univ North Carolina Chapel Hill, Dept Comp Sci, Chapel Hill, NC USA
Source
PROCEEDINGS OF THE 61ST ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC 2024 | 2024
Keywords
Split Computing; Multi-Task Learning; Deep Neural Networks; Edge Devices;
DOI
10.1145/3649329.3655686
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Split Computing (SC), in which a Deep Neural Network (DNN) is intelligently split so that part of it is deployed on an edge device and the rest on a remote server, is emerging as a promising approach. It allows the power of DNNs to be leveraged for latency-sensitive applications that do not permit the entire DNN to be deployed remotely, yet lack sufficient computation bandwidth to run it entirely locally. In many such embedded-systems scenarios, such as those in the automotive domain, computational resource constraints also necessitate Multi-Task Learning (MTL), where the same DNN is used for multiple inference tasks instead of dedicating a separate DNN to each task, which would require more computing bandwidth. However, how to partition such a multi-tasking DNN for deployment within an SC framework has not been sufficiently studied. This paper studies this problem, and MTL-Split, our proposed novel architecture, shows encouraging results on both synthetic and real-world data. The source code is available at https://github.com/intelligolabs/MTL-Split.
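The abstract describes the core idea only at a high level: a single multi-task DNN whose shared backbone is cut at a split point, with the layers before the cut running on the edge device and the remaining layers plus per-task heads running on the server. Below is a minimal PyTorch sketch of that idea, not the authors' MTL-Split implementation (see the linked repository for that); all names here (EdgeEncoder, ServerMultiTaskHead, split_idx, the toy backbone, and the task output dimensions) are hypothetical and chosen only for illustration.

import torch
import torch.nn as nn


class EdgeEncoder(nn.Module):
    """Backbone layers executed on the edge device, up to the split point."""

    def __init__(self, backbone_layers, split_idx):
        super().__init__()
        self.layers = nn.Sequential(*backbone_layers[:split_idx])

    def forward(self, x):
        # The returned tensor is what would be transmitted to the server.
        return self.layers(x)


class ServerMultiTaskHead(nn.Module):
    """Remaining backbone layers plus one output head per task, run on the server."""

    def __init__(self, backbone_layers, split_idx, feat_dim, task_out_dims):
        super().__init__()
        self.layers = nn.Sequential(*backbone_layers[split_idx:])
        self.heads = nn.ModuleDict(
            {name: nn.Linear(feat_dim, dim) for name, dim in task_out_dims.items()}
        )

    def forward(self, z):
        feats = self.layers(z).flatten(1)
        # One prediction per task from the shared server-side features.
        return {name: head(feats) for name, head in self.heads.items()}


# Toy convolutional backbone, split after the second block (split_idx=2).
backbone = [
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)),
]
edge = EdgeEncoder(backbone, split_idx=2)
server = ServerMultiTaskHead(backbone, split_idx=2, feat_dim=64,
                             task_out_dims={"task_a": 10, "task_b": 1})

x = torch.randn(1, 3, 64, 64)   # sensor input captured on the edge device
z = edge(x)                     # intermediate tensor, sent over the network in SC
outputs = server(z)             # per-task predictions computed on the server
print({name: out.shape for name, out in outputs.items()})

In an actual SC deployment, the intermediate tensor z would be compressed before transmission, and the split point would be chosen to balance edge compute, bandwidth, and accuracy across the tasks, which is the design question the paper addresses.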
Pages: 6