YUTO SEMANTIC: A LARGE SCALE AERIAL LIDAR DATASET FOR SEMANTIC SEGMENTATION

Authors
Yoo, S. [1 ,2 ]
Ko, C. [1 ,2 ]
Sohn, G. [1 ,2 ]
Lee, H. [1 ,2 ]
Affiliations
[1] York Univ, GeoICT Lab, Toronto, ON M3J 1P3, Canada
[2] York Univ, Dept Earth & Space Sci & Engn, Toronto, ON M3J 1P3, Canada
Source
GEOSPATIAL WEEK 2023, VOL. 48-1 | 2023
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
semantic segmentation; aerial imagery; laser scanning; evaluation; test;
DOI
10.5194/isprs-archives-XLVIII-1-W2-2023-209-2023
Abstract
Creating virtual duplicates of the real world has garnered significant attention due to its applications in areas such as autonomous driving, urban planning, and urban mapping. One of the critical tasks in the computer vision community is semantic segmentation of point clouds collected in outdoor environments. The development of robust semantic segmentation algorithms relies heavily on precise and comprehensive benchmark datasets. In this paper, we present the York University Teledyne Optech 3D Semantic Segmentation Dataset (YUTO Semantic), a multi-mission large-scale aerial LiDAR dataset designed specifically for 3D point cloud semantic segmentation. The dataset comprises approximately 738 million points covering an area of 9.46 square kilometers, resulting in a high point density of 100 points per square meter. Each point in the dataset is annotated with one of nine semantic classes. Additionally, we conducted performance tests of state-of-the-art algorithms to evaluate their effectiveness in semantic segmentation tasks. The YUTO Semantic dataset serves as a valuable resource for advancing research in 3D point cloud semantic segmentation and contributes to the development of more accurate and robust algorithms for real-world applications. The dataset is available at https://github.com/Yacovitch/YUTO_Semantic.
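As the abstract describes, each point in the dataset carries one of nine semantic class labels. The sketch below is a minimal, hypothetical illustration of how such per-point labels might be tallied as a sanity check on a tile; the point tuples, class IDs, and class meanings here are invented for illustration and do not reflect the actual YUTO Semantic file format or label scheme.

```python
from collections import Counter

# Hypothetical sample: each point is (x, y, z, class_id). Real aerial LiDAR
# tiles would typically be read from LAS/LAZ files with a per-point label.
points = [
    (0.0, 0.0, 12.5, 1),  # class 1: e.g. building (assumed mapping)
    (1.0, 0.5, 12.7, 1),
    (2.0, 1.0, 0.1, 0),   # class 0: e.g. ground (assumed mapping)
    (2.5, 1.5, 0.0, 0),
    (3.0, 2.0, 5.3, 2),   # class 2: e.g. vegetation (assumed mapping)
]

# Per-class point counts: a first sanity check on any labeled tile,
# and the basis for class-imbalance statistics in benchmark papers.
counts = Counter(label for *_, label in points)
print(dict(counts))  # {1: 2, 0: 2, 2: 1}
```

A check like this helps surface class imbalance, which is typical of aerial scenes where ground and building points dominate rarer classes.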
Pages: 209-215 (7 pages)