A SUPERPOINT NEURAL NETWORK IMPLEMENTATION FOR ACCURATE FEATURE EXTRACTION IN UNSTRUCTURED ENVIRONMENTS

Cited: 0
Authors
Petrakis, Georgios [1 ]
Affiliation
[1] Tech Univ Crete, Univ Campus, Akrotiri 73100, Chania, Greece
Source
GEOSPATIAL WEEK 2023, VOL. 48-1 | 2023
Keywords
Feature extraction; deep neural networks; unstructured environments
DOI
10.5194/isprs-archives-XLVIII-1-W2-2023-1215-2023
Chinese Library Classification
K85 [Cultural relics and archaeology]
Discipline Code
0601
Abstract
Feature extraction plays a crucial role in visual localization, SLAM (Simultaneous Localization and Mapping) and autonomous navigation, enabling the extraction and tracking of distinctive visual features for both mapping and localization tasks. However, most studies investigate the efficiency and performance of such algorithms in urban, vegetated or indoor environments rather than in unstructured environments, which suffer from a scarcity of the visual cues on which a feature extraction algorithm or architecture can rely. In this study, the efficiency of the SuperPoint architecture in keypoint detection and description was investigated on unstructured and planetary scenes, producing three different models: (a) an original SuperPoint model trained from scratch, (b) an original, fine-tuned SuperPoint model, and (c) an optimized SuperPoint model trained from scratch with the same parameterization as the corresponding original model. For the training process, a dataset of 48,000 images representing unstructured scenes from Earth, the Moon and Mars was utilized, while a benchmark dataset was used to evaluate the models under illumination and viewpoint changes. The experiments showed that the optimized SuperPoint model provides superior performance on repeatability and homography estimation metrics compared with the original SuperPoint models and with handcrafted keypoint detectors and descriptors.
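The repeatability metric named in the abstract measures how consistently a detector fires on the same scene points across two views related by a known ground-truth homography, as in HPatches-style benchmarks. A minimal NumPy sketch of one common symmetric formulation follows; the function names and the pixel threshold are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def warp_points(pts, H):
    """Apply a 3x3 homography to an Nx2 array of (x, y) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    warped = pts_h @ H.T
    return warped[:, :2] / warped[:, 2:3]             # perspective division

def repeatability(kpts_a, kpts_b, H, eps=3.0):
    """Symmetric repeatability: fraction of keypoints whose warp lands
    within eps pixels of a keypoint detected in the other image.
    (Illustrative definition; thresholds vary across benchmarks.)"""
    wa = warp_points(kpts_a, H)                  # image A -> image B
    wb = warp_points(kpts_b, np.linalg.inv(H))   # image B -> image A
    d_ab = np.linalg.norm(wa[:, None] - kpts_b[None], axis=2)
    d_ba = np.linalg.norm(wb[:, None] - kpts_a[None], axis=2)
    n_rep = (d_ab.min(axis=1) <= eps).sum() + (d_ba.min(axis=1) <= eps).sum()
    return n_rep / (len(kpts_a) + len(kpts_b))
```

For identical detections under the identity homography this returns 1.0; in practice the keypoints would come from the compared detectors (SuperPoint variants or handcrafted ones) on the benchmark image pairs.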
Pages: 1215-1222
Page count: 8
References
30 items
  • [1] HPatches: A benchmark and evaluation of handcrafted and learned local descriptors
    Balntas, Vassileios
    Lenc, Karel
    Vedaldi, Andrea
    Mikolajczyk, Krystian
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 3852 - 3861
  • [2] Barham P., 2015, TensorFlow: large-scale machine learning on heterogeneous systems
  • [3] Bojanic D, 2019, INT SYMP IMAGE SIG, P64, DOI 10.1109/ISPA.2019.8868792
  • [4] Bradski G, 2000, DR DOBBS J, V25, P120
  • [5] Christiansen P., 2019, UnsuperPoint: End-to-end Unsupervised Interest Point Detector and Descriptor
  • [6] SuperPoint: Self-Supervised Interest Point Detection and Description
    DeTone, Daniel
    Malisiewicz, Tomasz
    Rabinovich, Andrew
    [J]. PROCEEDINGS 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2018, : 337 - 349
  • [7] D2-Net: A Trainable CNN for Joint Description and Detection of Local Features
    Dusmanu, Mihai
    Rocco, Ignacio
    Pajdla, Tomas
    Pollefeys, Marc
    Sivic, Josef
    Torii, Akihiko
    Sattler, Torsten
    [J]. 2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 8084 - 8093
  • [8] The Devon Island rover navigation dataset
    Furgale, Paul
    Carle, Pat
    Enright, John
    Barfoot, Timothy D.
    [J]. INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2012, 31 (06) : 707 - 713
  • [9] Giubilato R., 2022, Solid-State LiDAR, Inertial Dataset, IEEE Robotics and Automation Letters, V7
  • [10] Learning-Based Methods of Perception and Navigation for Ground Vehicles in Unstructured Environments: A Review
    Guastella, Dario Calogero
    Muscato, Giovanni
    [J]. SENSORS, 2021, 21 (01) : 1 - 22