Unsupervised Visual Ego-motion Learning for Robots

Cited: 0
Authors
Khalilbayli, Fidan [1 ]
Bayram, Baris [1 ]
Ince, Gokhan [1 ]
Affiliations
[1] Istanbul Tech Univ, Comp Engn Dept, Istanbul, Turkey
Keywords
Image Processing; Robotics; Visual Ego-motion Estimation; Unsupervised Learning; Sensor Fusion;
DOI
10.1109/UBMK.2019.8907192
Chinese Library Classification (CLC)
TP301 [Theory, Methods];
Discipline code
081202 ;
Abstract
The goal of an autonomous robot is to navigate without a human operator despite changing conditions and mobile objects in its environment. To achieve this, both the dynamic state of the robot and the 3D structure of the environment must be known. When a continuous image sequence is discretized, three types of motion can be observed: 1) a mobile camera (or, as in this study, a robot) capturing a static scene, 2) independently moving objects in front of a static camera, or 3) a camera and independent objects moving simultaneously at any given time. One of the challenges is the presence of one or more mobile objects in the environment, each with an independent velocity and direction with respect to the environment. In this work, a visual ego-motion estimation approach with unsupervised learning for robots is introduced, using stereo video captured by a camera mounted on the robot. In addition, audio perception is fused with the visual estimate in order to identify the source of the motion. We verified the effectiveness of our approach by conducting three experiments covering combined robot and object motion, robot-only motion, and object-only motion.
Pages: 676-681
Page count: 6
Related Papers
50 records
  • [1] Towards Visual Ego-motion Learning in Robots
    Pillai, Sudeep
    Leonard, John J.
    2017 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2017, : 5533 - 5540
  • [2] Unsupervised Learning of Depth and Ego-Motion from Video
    Zhou, Tinghui
    Brown, Matthew
    Snavely, Noah
    Lowe, David G.
    30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 6612 - +
  • [3] Unsupervised monocular depth and ego-motion learning with structure and semantics
    Casser, Vincent
    Pirk, Soeren
    Mahjourian, Reza
    Angelova, Anelia
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2019), 2019, : 381 - 388
  • [4] Unsupervised Learning of Monocular Depth and Ego-Motion Using Multiple Masks
    Wang, Guangming
    Wang, Hesheng
    Liu, Yiling
    Chen, Weidong
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 4724 - 4730
  • [5] Unsupervised Learning of Depth and Ego-Motion from Continuous Monocular Images
    Wang, Zhuo
    Huang, Min
    Huang, Xiao-Long
    Ma, Fei
    Dou, Jia-Ming
    Lyu, Jian-Li
    Journal of Computers (Taiwan), 2021, 32 (06) : 38 - 51
  • [6] Unsupervised Learning of Monocular Depth and Ego-Motion using Conditional PatchGANs
    Vankadari, Madhu
    Kumar, Swagat
    Majumder, Anima
    Das, Kaushik
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 5677 - 5684
  • [7] Unsupervised Learning of Depth and Ego-Motion from Cylindrical Panoramic Video
    Sharma, Alisha
    Ventura, Jonathan
    2019 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND VIRTUAL REALITY (AIVR), 2019, : 58 - 65
  • [8] Unsupervised Learning of Monocular Depth and Ego-Motion in Outdoor/Indoor Environments
    Gao, Ruipeng
    Xiao, Xuan
    Xing, Weiwei
    Li, Chi
    Liu, Lei
    IEEE INTERNET OF THINGS JOURNAL, 2022, 9 (17) : 16247 - 16258
  • [9] PoseConvGRU: A Monocular Approach for Visual Ego-motion Estimation by Learning
    Zhai, Guangyao
    Liu, Liang
    Zhang, Linjian
    Liu, Yong
    Jiang, Yunliang
    PATTERN RECOGNITION, 2020, 102
  • [10] Practical ego-motion estimation for mobile robots
    Schärer, S
    Baltes, J
    Anderson, J
    2004 IEEE CONFERENCE ON ROBOTICS, AUTOMATION AND MECHATRONICS, VOLS 1 AND 2, 2004, : 921 - 926