DEPTHFORMER: MULTISCALE VISION TRANSFORMER FOR MONOCULAR DEPTH ESTIMATION WITH GLOBAL LOCAL INFORMATION FUSION

Cited: 23
Authors
Agarwal, Ashutosh [1 ]
Arora, Chetan [1 ]
Affiliations
[1] Indian Inst Technol Delhi, Delhi, India
Source
2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP | 2022
Keywords
depth estimation; transformer; attention; adaptive bins;
DOI
10.1109/ICIP46576.2022.9897187
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Attention-based models such as transformers have shown outstanding performance on dense prediction tasks, such as semantic segmentation, owing to their capability of capturing long-range dependencies in an image. However, the benefit of transformers for monocular depth prediction has seldom been explored so far. This paper benchmarks various transformer-based models for the depth estimation task on the indoor NYUV2 dataset and the outdoor KITTI dataset. We propose a novel attention-based architecture, Depthformer, for monocular depth estimation that uses multi-head self-attention to produce multiscale feature maps, which are effectively combined by our proposed decoder network. We also propose a Transbins module that divides the depth range into bins whose center values are estimated adaptively per image. The final depth estimate for each pixel is a linear combination of the bin centers. The Transbins module takes advantage of the global receptive field provided by the transformer module in the encoding stage. Experimental results on the NYUV2 and KITTI depth estimation benchmarks demonstrate that our proposed method improves the state-of-the-art by 3.3% and 3.3%, respectively, in terms of Root Mean Squared Error (RMSE). Code is available at https://github.com/ashutosh1807/Depthformer.git.
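The adaptive-bins idea in the abstract (depth range split into per-image bins, final depth as a probability-weighted combination of bin centers) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual Transbins implementation; the function name, the cumulative-width bin layout, and the depth range are assumptions for illustration.

```python
import numpy as np

def depth_from_adaptive_bins(bin_widths, bin_logits, min_depth=1e-3, max_depth=10.0):
    """Combine per-image adaptive bin centers with per-pixel bin probabilities.

    bin_widths : (N,) nonnegative widths predicted per image (normalized below)
    bin_logits : (H, W, N) per-pixel logits over the N depth bins
    Returns an (H, W) depth map.
    """
    # Normalize widths and lay the bins out over [min_depth, max_depth].
    widths = bin_widths / bin_widths.sum()
    edges = min_depth + (max_depth - min_depth) * np.cumsum(
        np.concatenate([[0.0], widths]))
    centers = 0.5 * (edges[:-1] + edges[1:])  # (N,) adaptive bin centers

    # Per-pixel softmax over the bins (numerically stabilized).
    logits = bin_logits - bin_logits.max(axis=-1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)

    # Final depth: linear combination of bin centers, weighted per pixel.
    return (probs * centers).sum(axis=-1)
```

Because the output is a convex combination of bin centers, every predicted depth is guaranteed to lie inside the chosen depth range.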
Pages: 3873-3877
Page count: 5