A Multi-Task Vision Transformer for Segmentation and Monocular Depth Estimation for Autonomous Vehicles

Cited by: 12
Authors
Bavirisetti, Durga Prasad [1 ]
Martinsen, Herman Ryen [2 ]
Kiss, Gabriel Hanssen [1 ]
Lindseth, Frank [1 ]
Affiliations
[1] Norwegian Univ Sci & Technol, Dept Comp Sci, N-7034 Trondheim, Norway
[2] Capgemini, N-1671 Fredrikstad, Norway
Source
IEEE OPEN JOURNAL OF INTELLIGENT TRANSPORTATION SYSTEMS | 2023, Vol. 4
Keywords
Vision transformer; monocular depth prediction; autonomous vehicles; segmentation; multi-task
DOI
10.1109/OJITS.2023.3335648
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this paper, we investigate the use of Vision Transformers for processing and understanding visual data in an autonomous driving setting. Specifically, we explore the use of Vision Transformers for semantic segmentation and monocular depth estimation using only a single image as input. We present state-of-the-art Vision Transformers for these tasks and combine them into a multitask model. Through multiple experiments on four different street image datasets, we demonstrate that the multitask approach significantly reduces inference time while maintaining high accuracy for both tasks. Additionally, we show that the size of the Transformer-based backbone can be varied to trade off inference speed against accuracy. Furthermore, we investigate the use of synthetic data for pre-training and show that it effectively increases the accuracy of the model when real-world data is limited.
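The inference-time saving reported in the abstract comes from the multitask design: a single Transformer backbone encodes the image once, and the segmentation and depth heads both reuse those features, instead of two single-task models each running their own encoder. A minimal toy sketch of that structure is shown below; all class names and the stand-in "feature" computations are illustrative assumptions, not the paper's actual architecture.

```python
# Toy sketch of a shared-backbone multi-task model: the (expensive) encoder
# runs once per image, and both task heads reuse the same features.

class ToyBackbone:
    """Stand-in for a Vision Transformer encoder (hypothetical)."""
    def __init__(self, width):
        self.width = width   # stand-in for backbone size (speed/accuracy knob)
        self.calls = 0       # count encoder invocations

    def encode(self, image):
        self.calls += 1
        # Pretend "features" are just pixels scaled by the backbone width.
        return [pixel * self.width for pixel in image]

class SegHead:
    def predict(self, features):
        # Hypothetical: threshold features into two semantic classes.
        return [1 if f > 0 else 0 for f in features]

class DepthHead:
    def predict(self, features):
        # Hypothetical: map features to positive "depth" values.
        return [abs(f) + 1.0 for f in features]

class MultiTaskModel:
    def __init__(self, backbone):
        self.backbone = backbone
        self.seg_head = SegHead()
        self.depth_head = DepthHead()

    def forward(self, image):
        feats = self.backbone.encode(image)   # single shared encoder pass
        return self.seg_head.predict(feats), self.depth_head.predict(feats)

model = MultiTaskModel(ToyBackbone(width=2))
seg, depth = model.forward([-1.0, 0.5, 3.0])
print(seg)                   # [0, 1, 1]
print(depth)                 # [3.0, 2.0, 7.0]
print(model.backbone.calls)  # 1 -- both tasks shared one encoder pass
```

The `width` parameter stands in for the backbone-size knob the abstract mentions: a larger encoder costs more per pass but, since the pass is shared, the relative saving over two single-task models is preserved.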
Pages: 909-928 (20 pages)