LiDAR Inpainting of UAV Based 3D Point Cloud Using Supervised Learning

Cited by: 0
Authors
Talha, Muhammad [1 ]
Hussein, Aya [1 ]
Hossny, Mohammed [1 ]
Affiliations
[1] UNSW, Sch Engn & Informat Technol, Canberra, ACT, Australia
Source
ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT I | 2024 / Vol. 14471
Funding
Australian Research Council;
Keywords
LiDAR inpainting; 3D Reconstruction; Point clouds;
DOI
10.1007/978-981-99-8388-9_17
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Unmanned Aerial Vehicles (UAVs) can quickly scan unknown environments to support a wide range of operations, from intelligence gathering to search and rescue. LiDAR point clouds can give a detailed and accurate 3D representation of such unknown environments. However, LiDAR point clouds are often sparse and miss important information due to occlusions and limited sensor resolution. Several studies have used inpainting techniques on LiDAR point clouds to complete the missing regions. However, these studies have three main limitations that hinder their use in UAV-based 3D environment reconstruction. First, existing studies focused only on synthetic data. Second, while the point clouds obtained from a UAV flying at moderate to high speeds can be severely distorted, none of the existing studies applied inpainting to UAV-based LiDAR point clouds. Third, all existing techniques considered inpainting isolated objects and did not generalise to inpainting complete environments. This paper aims to address these gaps by proposing an algorithm for inpainting point clouds of complete 3D environments obtained from a UAV. We use a supervised learning encoder-decoder model for point cloud inpainting and environment reconstruction. We tested the proposed approach for different LiDAR parameters and different environmental settings. The results demonstrate the ability of the system to inpaint the objects with a minimum average Chamfer Distance (CD) loss of 0.028 at a UAV speed of 5 m/s. We present the results of the 3D reconstruction for a few test environments.
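The Chamfer Distance (CD) reported in the abstract is a standard metric for comparing an inpainted point cloud against a ground-truth one. The paper does not state which CD variant it uses (squared vs. unsquared distances, sum vs. mean reduction), so the following is a minimal NumPy sketch of the common symmetric mean-of-nearest-neighbour form, not the authors' exact implementation:

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets p (N, 3) and q (M, 3).

    For each point in p, take the distance to its nearest neighbour in q,
    average over p, then add the same term with the roles of p and q swapped.
    """
    # Pairwise Euclidean distance matrix of shape (N, M) via broadcasting.
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Identical clouds give a CD of 0; lower values mean the completed cloud lies closer to the ground truth. Note this brute-force version is O(N·M) in memory, so practical implementations use a k-d tree or GPU batching for large clouds.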
Pages: 203-214
Page count: 12
References (23 items)
[1] Achlioptas P. PR MACH LEARN RES, 2018, Vol. 80.
[2] Behley, Jens; Garbade, Martin; Milioto, Andres; Quenzel, Jan; Behnke, Sven; Stachniss, Cyrill; Gall, Juergen. SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019: 9296-9306.
[3] Chen, Jingdao; Yi, John Seon Keun; Kahoush, Mark; Cho, Erin S.; Cho, Yong K. Point Cloud Scene Completion of Obstructed Building Facades with Generative Adversarial Inpainting. SENSORS, 2020, 20(18): 1-27.
[4] Fan, Haoqiang; Su, Hao; Guibas, Leonidas. A Point Set Generation Network for 3D Object Reconstruction from a Single Image. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017: 2463-2471.
[5] Foley K. Parrot AR drone 2.0 elite edition, 2022.
[6] Fu ZQ. IEEE IMAGE PROC, 2018: 2137. DOI 10.1109/ICIP.2018.8451550.
[7] Gu, Jiayuan; Ma, Wei-Chiu; Manivasagam, Sivabalan; Zeng, Wenyuan; Wang, Zihao; Xiong, Yuwen; Su, Hao; Urtasun, Raquel. Weakly-Supervised 3D Shape Completion in the Wild. COMPUTER VISION - ECCV 2020, PT V, 2020, 12350: 283-299.
[8] Hong, Xin; Xiong, Pengfei; Ji, Renhe; Fan, Haoqiang. Deep Fusion Network for Image Completion. PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019: 2033-2042.
[9] Liu, Hongyu; Jiang, Bin; Song, Yibing; Huang, Wei; Yang, Chao. Rethinking Image Inpainting via a Mutual Encoder-Decoder with Feature Equalizations. COMPUTER VISION - ECCV 2020, PT II, 2020, 12347: 725-741.
[10] Huang, Zitian; Yu, Yikuan; Xu, Jiawen; Ni, Feng; Le, Xinyi. PF-Net: Point Fractal Network for 3D Point Cloud Completion. 2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020: 7659-7667.