Transformers in Unsupervised Structure-from-Motion

Cited: 0
Authors
Chawla, Hemang [1 ,2 ]
Varma, Arnav [1 ]
Arani, Elahe [1 ,2 ]
Zonooz, Bahram [1 ,2 ]
Affiliations
[1] NavInfo Europe, Adv Res Lab, Eindhoven, Netherlands
[2] Eindhoven Univ Technol, Dept Math & Comp Sci, Eindhoven, Netherlands
Source
COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS, VISIGRAPP 2022 | 2023, Vol. 1815
Keywords
Structure-from-motion; Monocular depth estimation; Monocular pose estimation; Camera calibration; Natural corruptions; Adversarial attacks; VISION;
D O I
10.1007/978-3-031-45725-8_14
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transformers have revolutionized deep learning-based computer vision, improving performance as well as robustness to natural corruptions and adversarial attacks. They are used predominantly for 2D vision tasks, including image classification, semantic segmentation, and object detection. However, robots and advanced driver assistance systems also require 3D scene understanding for decision making, extracted via structure-from-motion (SfM). We propose a robust transformer-based monocular SfM method that learns to simultaneously predict monocular pixel-wise depth, the ego vehicle's translation and rotation, and the camera's focal length and principal point. With experiments on the KITTI and DDAD datasets, we demonstrate how to adapt different vision transformers and compare them against contemporary CNN-based methods. Our study shows that transformer-based architectures, though less run-time efficient, achieve comparable performance while being more robust against natural corruptions as well as untargeted and targeted attacks. (Code: https://github.com/NeurAI-Lab/MT-SfMLearner)
Pages: 281-303
Page count: 23
References (60 total)
[21]  
Hendrycks D., 2019, Proceedings of the 7th International Conference on Learning Representations
[22]  
Huang Hanxun, 2021, Advances in Neural Information Processing Systems, Vol. 34
[23]   A Comprehensive Study of Vision Transformers on Dense Prediction Tasks [J].
Jeeveswaran, Kishaan ;
Kathiresan, Senthilkumar ;
Varma, Arnav ;
Magdy, Omar ;
Zonooz, Bahram ;
Arani, Elahe .
PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 4, 2022, :213-223
[24]   GLMNet: Graph learning-matching convolutional networks for feature matching [J].
Jiang, Bo ;
Sun, Pengfei ;
Luo, Bin .
PATTERN RECOGNITION, 2022, 121
[25]   Self-supervised Monocular Trained Depth Estimation using Self-attention and Discrete Disparity Volume [J].
Johnston, Adrian ;
Carneiro, Gustavo .
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, :4755-4764
[26]  
Kästner L, 2020, IEEE International Conference on Robotics and Automation, P1135, DOI 10.1109/ICRA40945.2020.9197155
[27]  
Kingma D. P., 2014, arXiv
[28]  
Kline J, 2020, PROCEEDINGS OF THE 2020 32ND INTERNATIONAL TELETRAFFIC CONGRESS (ITC 32), P1, DOI [10.1109/ITC3249928.2020.00009, 10.1007/978-3-030-58565-5_35]
[29]   Evaluation of CNN-Based Single-Image Depth Estimation Methods [J].
Koch, Tobias ;
Liebel, Lukas ;
Fraundorfer, Friedrich ;
Koerner, Marco .
COMPUTER VISION - ECCV 2018 WORKSHOPS, PT III, 2019, 11131 :331-348
[30]  
Kurakin A., 2016, Adversarial examples in the physical world