Occupancy Map Inpainting for Online Robot Navigation

Cited by: 5
Authors
Wei, Minghan [1 ]
Lee, Daewon [1 ]
Isler, Volkan [1 ]
Lee, Daniel [1 ]
Affiliations
[1] Samsung AI Center New York, 837 Washington St, New York, NY 10014, USA
Source
2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021) | 2021
DOI
10.1109/ICRA48506.2021.9561790
CLC Classification Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
In this work, we focus on mobile robot navigation in indoor environments where occlusions and field-of-view limitations hinder onboard sensing. We show that the effective footprint of a robot-mounted camera can be drastically extended using learning-based approaches. Specifically, we consider the task of building an occupancy map for autonomous navigation of a robot equipped with a depth camera. In our approach, a local occupancy map is first computed directly from the camera's measurements. An inpainting network then augments the map with occupancy probabilities for grid cells the camera did not observe. A novel aspect of our approach is that, rather than relying on direct supervision from ground truth, we use the information from a second camera with a wider field of view for supervision, so training focuses on predicting extensions of the sensed data. To test the effectiveness of our approach, we use a robot setup with a single camera mounted 0.5 m above the ground. We compare navigation performance using raw maps built from this camera's input alone (baseline) against inpainted maps augmented by our network. Our method outperforms the baseline even in completely new environments not included in the training set, and can yield paths 21% shorter than the baseline approach. A real-time implementation of our method on a mobile robot is also tested in home and office environments.
Pages: 8551-8557 (7 pages)
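The pipeline described in the abstract can be illustrated with a short sketch. What follows is a minimal, hypothetical PyTorch example, not the authors' implementation: the architecture, layer sizes, and the names InpaintNet and inpainting_loss are all illustrative assumptions. It shows the two ideas the abstract highlights: a network that inpaints occupancy probabilities only for unobserved grid cells, and a training loss supervised by a second, wider-FOV camera rather than by ground-truth maps.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class InpaintNet(nn.Module):
        # Hypothetical encoder-decoder; the paper's actual architecture
        # is not reproduced here.
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            )
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            )

        def forward(self, occ, observed):
            # occ:      (B, H, W) occupancy probabilities from the depth camera
            # observed: (B, H, W) float mask of cells the camera actually saw
            x = torch.stack([occ, observed], dim=1)    # (B, 2, H, W)
            logits = self.dec(self.enc(x)).squeeze(1)  # (B, H, W)
            # Measured cells are kept as-is; only unseen cells are inpainted.
            return torch.where(observed.bool(), occ, torch.sigmoid(logits))

    def inpainting_loss(pred, wide_occ, robot_mask, wide_mask):
        # Supervise only cells the wide-FOV camera observed but the robot's
        # camera did not, i.e. train the network to extend the sensed data.
        target = wide_mask.bool() & ~robot_mask.bool()
        return F.binary_cross_entropy(pred[target], wide_occ[target])

Note that under this training scheme the second camera is needed only at training time as a supervision signal; at deployment the robot relies on its single onboard camera plus the learned inpainting, which is what lets the method avoid ground-truth occupancy maps entirely.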