Driven to Distraction: Self-Supervised Distractor Learning for Robust Monocular Visual Odometry in Urban Environments

Cited by: 0
Authors
Barnes, Dan [1 ]
Maddern, Will [1 ]
Pascoe, Geoffrey [1 ]
Posner, Ingmar [1 ]
Affiliations
[1] Univ Oxford, Dept Engn Sci, Oxford Robot Inst, Oxford, England
Funding
UK Engineering and Physical Sciences Research Council (EPSRC);
Keywords
DOI
Not available
CLC Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
We present a self-supervised approach to ignoring "distractors" in camera images for the purposes of robustly estimating vehicle motion in cluttered urban environments. We leverage offline multi-session mapping approaches to automatically generate a per-pixel ephemerality mask and depth map for each input image, which we use to train a deep convolutional network. At run-time we use the predicted ephemerality and depth as an input to a monocular visual odometry (VO) pipeline, using either sparse features or dense photometric matching. Our approach yields metric-scale VO using only a single camera and can recover the correct egomotion even when 90% of the image is obscured by dynamic, independently moving objects. We evaluate our robust VO methods on more than 400km of driving from the Oxford RobotCar Dataset and demonstrate reduced odometry drift and significantly improved egomotion estimation in the presence of large moving vehicles in urban traffic.
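The abstract describes feeding a predicted per-pixel ephemerality mask and depth map into a monocular VO pipeline. The sketch below is a minimal illustration, under stated assumptions, of how such a mask might down-weight distractor pixels in a dense photometric cost. It is not the authors' implementation: every function name (backproject, warp, weighted_photometric_residual), the toy data, and the convention that the mask encodes the probability of a pixel being ephemeral (so static weight = 1 - mask) are assumptions for illustration only.

```python
# Illustrative sketch only (not the paper's code): weighting a dense
# photometric VO cost by a predicted ephemerality mask and depth map.
import numpy as np


def backproject(depth, K):
    """Back-project every pixel to a 3-D point using the depth map (metres)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N
    rays = np.linalg.inv(K) @ pix                                       # 3 x N
    return rays * depth.reshape(1, -1)                                  # 3 x N


def warp(points, T, K, shape):
    """Transform points by the relative pose T (4x4) and project into the current frame."""
    h, w = shape
    pts_h = np.vstack([points, np.ones((1, points.shape[1]))])          # 4 x N
    cam = (T @ pts_h)[:3]                                               # 3 x N
    proj = K @ cam
    z = np.maximum(proj[2], 1e-3)
    uv = proj[:2] / z
    inside = (proj[2] > 1e-3) & (uv[0] >= 0) & (uv[0] < w - 1) & (uv[1] >= 0) & (uv[1] < h - 1)
    return uv, inside


def weighted_photometric_residual(I_ref, I_cur, depth, ephemerality, K, T):
    """Mean squared intensity difference, weighted towards static (non-distractor) pixels."""
    h, w = I_ref.shape
    uv, inside = warp(backproject(depth, K), T, K, (h, w))
    # Nearest-neighbour lookup keeps the sketch short; a real pipeline would
    # interpolate bilinearly and minimise this cost over T with an iterative solver.
    ui = np.clip(np.round(uv[0]).astype(int), 0, w - 1)
    vi = np.clip(np.round(uv[1]).astype(int), 0, h - 1)
    diff = I_cur[vi, ui] - I_ref.reshape(-1)
    weights = (1.0 - ephemerality.reshape(-1)) * inside  # assumed: mask = P(distractor)
    return np.sum(weights * diff ** 2) / max(np.sum(weights), 1.0)


# Toy usage with synthetic data; in practice depth and the ephemerality mask
# would come from the trained network and T is the egomotion being estimated.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
I_ref = np.random.rand(480, 640)
I_cur = np.random.rand(480, 640)
depth = np.full((480, 640), 10.0)      # constant 10 m depth for the toy example
mask = np.zeros((480, 640))            # 0 = static everywhere
cost = weighted_photometric_residual(I_ref, I_cur, depth, mask, K, np.eye(4))
```

In a full pipeline this cost would be minimised over the relative pose T, with the predicted depth supplying the metric scale that a single camera cannot observe on its own.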
Pages: 1894-1900
Number of pages: 7
Related Papers
50 records in total
  • [1] Learning by Inertia: Self-Supervised Monocular Visual Odometry for Road Vehicles
    Wang, Chengze
    Yuan, Yuan
    Wang, Qi
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 2252 - 2256
  • [2] Online Self-Supervised Monocular Visual Odometry for Ground Vehicles
    Lee, Bhoram
    Daniilidis, Kostas
    Lee, Daniel D.
    2015 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2015, : 5232 - 5238
  • [3] Self-supervised Pretraining and Finetuning for Monocular Depth and Visual Odometry
    Antsfeld, Leonid
    Chidlovskii, Boris
    2024 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2024), 2024, : 14669 - 14676
  • [4] Keypoint Heatmap Guided Self-Supervised Monocular Visual Odometry
    Xiu, Haixin
    Liang, Yiyou
    Zeng, Hui
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2022, 105 (04)
  • [5] MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints
    Wang, Cong
    Wang, Yu-Ping
    Manocha, Dinesh
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022,
  • [6] Transformer-Based Self-Supervised Monocular Depth and Visual Odometry
    Zhao, Hongru
    Qiao, Xiuquan
    Ma, Yi
    Tafazolli, Rahim
    IEEE SENSORS JOURNAL, 2023, 23 (02) : 1436 - 1446
  • [7] Enhancing self-supervised monocular depth estimation with traditional visual odometry
    Andraghetti, Lorenzo
    Myriokefalitakis, Panteleimon
    Dovesi, Pier Luigi
    Luque, Belen
    Poggi, Matteo
    Pieropan, Alessandro
    Mattoccia, Stefano
    2019 INTERNATIONAL CONFERENCE ON 3D VISION (3DV 2019), 2019, : 424 - 433
  • [8] Self-supervised monocular visual odometry based on cross-correlation
    Hu, Jiaxin
    Tao, Bo
    Qian, Xinbo
    Jiang, Du
    Li, Gongfa
    MEASUREMENT SCIENCE AND TECHNOLOGY, 2024, 35 (08)
  • [9] A self-supervised monocular odometry with visual-inertial and depth representations
    Zhao, Lingzhe
    Xiang, Tianyu
    Wang, Zhuping
    JOURNAL OF THE FRANKLIN INSTITUTE-ENGINEERING AND APPLIED MATHEMATICS, 2024, 361 (06)