Multi-sensor point cloud data fusion for precise 3D mapping

Cited by: 32
Authors
Abdelazeem, Mohamed [1 ]
Elamin, Ahmed [2 ]
Afifi, Akram [3 ]
El-Rabbany, Ahmed [2 ]
Affiliations
[1] Aswan Univ, Civil Engn Dept, Aswan, Egypt
[2] Ryerson Univ, Civil Engn Dept, Toronto, ON, Canada
[3] Humber Inst Technol & Adv Learning, Fac Appl Sci & Technol, Toronto, ON, Canada
Source
EGYPTIAN JOURNAL OF REMOTE SENSING AND SPACE SCIENCES | 2021, Vol. 24, Iss. 03
Keywords
Data fusion; Terrestrial laser scanner; UAS; Point cloud; 3D modeling; LASER-SCANNING DATA; DATA INTEGRATION; TERRESTRIAL; LIDAR; PHOTOGRAMMETRY; OPTIMIZATION; ENVIRONMENT; UAV;
DOI
10.1016/j.ejrs.2021.06.002
Chinese Library Classification
X [Environmental Science, Safety Science];
Discipline Classification Code
08; 0830
Abstract
Multi-sensor data fusion has recently gained wide attention within the Geomatics research community, as it helps overcome the limitations of a single sensor, enabling a complete 3D model of a structure and better object classification. This study develops a data fusion algorithm that optimally combines sensor data from terrestrial systems and an unmanned aerial system (UAS) to obtain an improved and complete 3D mapping model of a structure. Terrestrial laser scanner (TLS) data are collected for the exterior of a building, along with DJI Phantom 4 Pro UAS images and terrestrial close-range Sony alpha 7R camera images. A number of ground control points and targets are established throughout the scanned building for the photogrammetric process and scan registration. Different point cloud datasets are generated from the TLS, the UAS, and the terrestrial Sony camera images. The point clouds created from each individual sensor, as well as the fused point clouds, are used in different forms, namely the original, denoised, and subsampled point clouds. The denoised point cloud dataset is generated by applying the statistical outlier remover (SOR) filter to the original point clouds. The relative precision of the 3D models is investigated using the multiscale model-to-model cloud comparison (M3C2) method, with the TLS-based 3D model used as a reference. It is found that the precision of the Sony-based 3D model is higher than that of the other two models for the original and denoised datasets. The fused Sony/UAS-based model provides a complete 3D model with higher precision than the UAS-based model.
(c) 2021 National Authority for Remote Sensing and Space Sciences. Production and hosting by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
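The SOR denoising step described in the abstract can be sketched as follows. This is a minimal illustrative implementation of the standard statistical-outlier-removal idea (mean distance to the k nearest neighbours, thresholded at the global mean plus a multiple of the standard deviation), not the authors' actual code; the function name `sor_filter` and the parameter defaults `k=8`, `std_ratio=1.0` are assumptions for illustration.

```python
import math
import statistics

def sor_filter(points, k=8, std_ratio=1.0):
    """Statistical outlier removal (SOR) sketch for a small point cloud.

    For each point, compute the mean distance to its k nearest neighbours.
    A point is kept only if that mean distance does not exceed
    global_mean + std_ratio * global_std over all points.
    Brute-force O(n^2); real implementations use a k-d tree.
    """
    mean_dists = []
    for i, p in enumerate(points):
        # Distances to all other points, smallest k first.
        ds = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        mean_dists.append(sum(ds[:k]) / k)

    mu = statistics.mean(mean_dists)
    sigma = statistics.pstdev(mean_dists)
    threshold = mu + std_ratio * sigma
    return [p for p, d in zip(points, mean_dists) if d <= threshold]

# Example: a 3x3x3 grid of inlier points plus one distant outlier.
cloud = [(float(x), float(y), float(z))
         for x in range(3) for y in range(3) for z in range(3)]
cloud.append((100.0, 100.0, 100.0))  # gross outlier
denoised = sor_filter(cloud, k=8, std_ratio=1.0)
```

In the example, the outlier's mean neighbour distance (~170) far exceeds the threshold, so it is removed while the 27 grid points survive. Production pipelines would typically use a library filter (e.g. PCL's `StatisticalOutlierRemoval` or Open3D's `remove_statistical_outlier`) rather than this brute-force version.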
Pages: 835-844
Page count: 10