Video Demo: Unsupervised Learning of Depth and Ego-Motion from Cylindrical Panoramic Video

Cited by: 0
Authors
Sharma, Alisha [1 ]
Ventura, Jonathan [2 ]
Affiliations
[1] Naval Res Lab, Labs Computat Phys & Fluid Dynam, Washington, DC 20375 USA
[2] Calif Polytech State Univ San Luis Obispo, Dept Comp Sci & Software Engn, San Luis Obispo, CA 93407 USA
Funding
U.S. National Science Foundation;
Keywords
computer vision; structure-from-motion; unsupervised learning; panoramic video;
DOI
10.1109/AIVR46125.2019.00059
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
In this demonstration, we will present a video showing depth predictions for street-level 360° panoramic footage generated by our unsupervised learning model. Panoramic depth estimation is important for a range of applications including virtual reality, 3D modeling, and autonomous robotic navigation. We have developed a convolutional neural network (CNN) model for unsupervised learning of depth and ego-motion from cylindrical panoramic video. In contrast with previous work, we focus on cylindrical panoramic projection. Unlike spherical or cube-map projection, cylindrical projection is fully compatible with traditional CNN layers while still supporting a continuous 360° horizontal field of view. We find that this increased field of view improves ego-motion prediction accuracy for street-level video input. This abstract motivates our work on unsupervised structure-from-motion estimation, describes the video demonstration, outlines our implementation, and summarizes our conclusions.
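The compatibility claim above can be illustrated with a short sketch. The Python/PyTorch fragment below is our own hedged illustration, not the authors' implementation: the helper name cylindrical_conv and the tensor sizes are hypothetical, and the code simply shows how horizontal wrap-around ("circular") padding lets an ordinary Conv2d process a cylindrical panorama while keeping the 360° horizontal field of view continuous across the image seam.

import torch
import torch.nn.functional as F

def cylindrical_conv(x, conv, pad=1):
    # x: (N, C, H, W) cylindrical panorama whose width spans 360 degrees,
    # so the left and right image borders are physically adjacent.
    # Wrap the width dimension around the cylinder, zero-pad the height,
    # then apply a standard convolution with no built-in padding.
    x = F.pad(x, (pad, pad, 0, 0), mode="circular")
    x = F.pad(x, (0, 0, pad, pad), mode="constant", value=0.0)
    return conv(x)

# Example usage: an ordinary 3x3 convolution on a hypothetical frame size.
conv = torch.nn.Conv2d(3, 16, kernel_size=3, padding=0)
panorama = torch.rand(1, 3, 128, 512)        # (N, C, H, W), W spans 360°
features = cylindrical_conv(panorama, conv)  # -> (1, 16, 128, 512)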
Pages: 255 - 256
Number of pages: 2