On the Analysis of the Depth Error on the Road Plane for Monocular Vision-Based Robot Navigation

Cited by: 0
Authors
Song, Dezhen [1 ]
Lee, Hyunnam [2 ]
Yi, Jingang
Affiliations
[1] Texas A&M Univ, CSE Dept, College Stn, TX 77843 USA
[2] Samsung Techwin Robot Business, Uichang, South Korea
Source
ALGORITHMIC FOUNDATIONS OF ROBOTICS VIII | 2010 / Vol. 57
Keywords
MOTION; RECONSTRUCTION; FACTORIZATION; RECOVERY; SLAM
DOI
Not available
CLC Classification
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
A mobile robot equipped with a single camera can take images at different locations to obtain 3D information about the environment for navigation. The depth information perceived by the robot is critical for obstacle avoidance. Given a calibrated camera, the accuracy of depth computation largely depends on the locations where the images are taken. For any given image pair, the depth error in regions close to the camera baseline can be excessively large or even infinite due to the degeneracy of the triangulation used in depth computation. Unfortunately, this region often overlaps with the robot's direction of motion, which could lead to collisions. To address this issue, we analyze depth computation and propose a predictive depth error model as a function of motion parameters. We refer to the region where the depth error exceeds a given threshold as the untrusted area. Since the robot needs to know beforehand how its motion affects the depth error distribution, we propose a closed-form model that predicts how the untrusted area is distributed on the road plane for given robot/camera positions. The analytical results have been verified in experiments with a mobile robot.
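As a rough illustration of the triangulation degeneracy the abstract refers to, the sketch below (not taken from the paper; the planar bearing-only formulation, the ±0.2° bearing noise, and the example robot positions are assumptions chosen for illustration) triangulates a road-plane point from two robot positions and numerically propagates a small bearing perturbation into the estimate. The propagated error grows sharply as the point approaches the line through the two camera centres, which is the behaviour that motivates marking such regions as untrusted once an error threshold is applied.

```python
import numpy as np

def triangulate(c1, c2, theta1, theta2):
    """Intersect two bearing rays on the road plane.

    c1, c2   : 2D camera centres (the two robot positions where images were taken)
    theta1/2 : bearing angles (rad) of the observed point as seen from c1 and c2
    Returns the estimated 2D point, or None when the rays are (nearly) parallel,
    i.e. the point lies along the camera baseline and triangulation degenerates.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([d1, -d2])          # solve c1 + t*d1 = c2 + s*d2 for (t, s)
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    t, _ = np.linalg.solve(A, c2 - c1)
    return c1 + t * d1

def depth_error(c1, c2, point, bearing_noise=np.deg2rad(0.2)):
    """Propagate a small bearing perturbation (hypothetical +/-0.2 deg of
    feature-matching noise) into the triangulated position and report the
    resulting position error at `point`."""
    theta1 = np.arctan2(point[1] - c1[1], point[0] - c1[0])
    theta2 = np.arctan2(point[1] - c2[1], point[0] - c2[0])
    est = triangulate(c1, c2, theta1 + bearing_noise, theta2 - bearing_noise)
    return np.inf if est is None else float(np.linalg.norm(est - point))

# Robot moves 0.3 m forward along +x between the two images.
c1, c2 = np.array([0.0, 0.0]), np.array([0.3, 0.0])
for target in ([2.0, 1.0], [2.0, 0.3], [2.0, 0.05]):  # progressively closer to the baseline
    print(target, round(depth_error(c1, c2, np.array(target)), 2))
# The error blows up for the last point; thresholding this error over the road
# plane yields the kind of "untrusted area" the abstract describes.
```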
Pages: 301+
Page count: 3
Related Papers
50 records in total
  • [1] Detection of moving objects in image plane for robot navigation using monocular vision
    Wang, Yin-Tien
    Sun, Chung-Hsun
    Chiou, Ming-Jang
    EURASIP JOURNAL ON ADVANCES IN SIGNAL PROCESSING, 2012
  • [2] Vision-Based Hybrid Map Building for Mobile Robot Navigation
    Uezer, Ferit
    Korrapati, Hemanth
    Royer, Eric
    Mezouar, Youcef
    Lee, Sukhan
    INTELLIGENT AUTONOMOUS SYSTEMS 13, 2016, 302 : 135 - 146
  • [3] A survey on vision-based UAV navigation
    Lu, Yuncheng
    Xue, Zhucun
    Xia, Gui-Song
    Zhang, Liangpei
    GEO-SPATIAL INFORMATION SCIENCE, 2018, 21 (01) : 21 - 32
  • [4] GPU-Assisted Learning on an Autonomous Marine Robot for Vision-Based Navigation and Image Understanding
    Manderson, Travis
    Dudek, Gregory
    OCEANS 2018 MTS/IEEE CHARLESTON, 2018
  • [5] Vision-based Unscented FastSLAM for Mobile Robot
    Qiu, Chunxin
    Zhu, Xiaorui
    Zhao, Xiaobing
    PROCEEDINGS OF THE 10TH WORLD CONGRESS ON INTELLIGENT CONTROL AND AUTOMATION (WCICA 2012), 2012, : 3758 - 3763
  • [6] Vision-Based Navigation in Image-Guided Interventions
    Mirota, Daniel J.
    Ishii, Masaru
    Hager, Gregory D.
    ANNUAL REVIEW OF BIOMEDICAL ENGINEERING, VOL 13, 2011, 13 : 297 - 319
  • [7] Vision-based Navigation Solution for Autonomous Underwater Vehicles
    Alves, Tiago
    Hormigo, Tiago
    Ventura, Rodrigo
    2022 IEEE INTERNATIONAL CONFERENCE ON AUTONOMOUS ROBOT SYSTEMS AND COMPETITIONS (ICARSC), 2022, : 226 - 231
  • [8] Stereo vision-based autonomous navigation for lunar rovers
    Cui, Pingyuan
    Yue, Fuzhan
    AIRCRAFT ENGINEERING AND AEROSPACE TECHNOLOGY, 2007, 79 (04) : 398 - 405
  • [9] Monocular vision-based depth map extraction method for 2D to 3D video conversion
    Tsai, Tsung-Han
    Fan, Chen-Shuo
    EURASIP JOURNAL ON IMAGE AND VIDEO PROCESSING, 2016
  • [10] Vision-based Localization and Robot-centric Mapping in Riverine Environments
    Yang, Junho
    Dani, Ashwin
    Chung, Soon-Jo
    Hutchinson, Seth
    JOURNAL OF FIELD ROBOTICS, 2017, 34 (03) : 429 - 450