End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles

Times Cited: 3
Authors
Gu, Junyi [1 ]
Lind, Artjom [2 ]
Chhetri, Tek Raj [3 ,4 ]
Bellone, Mauro [5 ]
Sell, Raivo [1 ]
Affiliations
[1] Tallinn Univ Technol, Dept Mech & Ind Engn, EE-12616 Tallinn, Estonia
[2] Univ Tartu, Inst Comp Sci, Intelligent Transportat Syst Lab, EE-51009 Tartu, Estonia
[3] Univ Innsbruck, Semant Technol Inst STI Innsbruck, Dept Comp Sci, A-6020 Innsbruck, Austria
[4] Ctr Artificial Intelligence AI Res Nepal, Sundarharaincha 56604, Nepal
[5] Tallinn Univ Technol, FinEst Ctr Smart Cities, EE-19086 Tallinn, Estonia
Keywords
multimodal sensors; autonomous driving; dataset collection framework; sensor calibration and synchronization; sensor fusion; CALIBRATION; CAMERA; ROAD; VISION; FUSION; RADAR;
DOI
10.3390/s23156783
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704;
Abstract
Autonomous driving vehicles rely on sensors for the robust perception of their surroundings. Such vehicles are equipped with multiple perceptive sensors with a high level of redundancy to ensure safety and reliability in any driving condition. However, multi-sensor systems, such as those combining camera, LiDAR, and radar, raise requirements related to sensor calibration and synchronization, which are the fundamental building blocks of any autonomous system. At the same time, sensor fusion and integration have become important aspects of autonomous driving research and directly determine the efficiency and accuracy of advanced functions such as object detection and path planning. Classical model-based estimation and data-driven models are the two mainstream approaches to achieving such integration. Most recent research is shifting to the latter, which shows high robustness in real-world applications but requires large quantities of data to be collected, synchronized, and properly categorized. However, existing works exhibit two major research gaps: (i) they lack fusion (and synchronization) of multiple sensors, namely camera, LiDAR, and radar; and (ii) they lack a generic, scalable, and user-friendly end-to-end implementation. To generalize the implementation of multi-sensor perceptive systems, we introduce an end-to-end generic sensor dataset collection framework that includes both hardware deployment solutions and sensor fusion algorithms. The framework prototype integrates a diverse set of sensors, namely camera, LiDAR, and radar. Furthermore, we present a universal toolbox that calibrates and synchronizes the three types of sensors based on their characteristics. The framework also includes fusion algorithms that exploit the complementary strengths of the camera, LiDAR, and radar and fuse their sensory information in a manner useful for object detection and tracking research. The generality of this framework makes it applicable to any robotic or autonomous application and suitable for quick, large-scale practical deployment.
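The record does not reproduce the framework's code, but the camera/LiDAR/radar synchronization it describes can be illustrated with a minimal, self-contained sketch of approximate-time matching. Everything below (the ApproxTimeSync class, the StampedMsg container, the 50 ms slop tolerance) is a hypothetical illustration of the general technique, not the authors' implementation.

```python
# Illustrative sketch only: class names, field names, and the tolerance are assumptions
# made for exposition; the paper's actual toolbox is not reproduced in this record.
from collections import deque
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class StampedMsg:
    stamp: float  # sensor timestamp in seconds
    data: Any     # raw payload (image, point cloud, radar targets, ...)

class ApproxTimeSync:
    """Pair camera, LiDAR, and radar messages whose timestamps agree within `slop` seconds."""

    def __init__(self, slop: float = 0.05, maxlen: int = 30):
        self.slop = slop
        self.buffers = {"camera": deque(maxlen=maxlen),
                        "lidar": deque(maxlen=maxlen),
                        "radar": deque(maxlen=maxlen)}

    def push(self, sensor: str, msg: StampedMsg) -> Optional[dict]:
        """Insert a new message and return a synchronized triple if one exists."""
        self.buffers[sensor].append(msg)
        return self._try_match()

    def _try_match(self) -> Optional[dict]:
        if any(len(buf) == 0 for buf in self.buffers.values()):
            return None
        # Use the newest camera frame as the pivot and find the closest LiDAR/radar messages.
        pivot = self.buffers["camera"][-1]
        matched = {"camera": pivot}
        for sensor in ("lidar", "radar"):
            closest = min(self.buffers[sensor], key=lambda m: abs(m.stamp - pivot.stamp))
            if abs(closest.stamp - pivot.stamp) > self.slop:
                return None  # no partner within tolerance yet
            matched[sensor] = closest
        return matched

# Example: a radar message arriving 20 ms after the camera frame still forms a synchronized set.
sync = ApproxTimeSync(slop=0.05)
sync.push("camera", StampedMsg(10.00, "frame"))
sync.push("lidar", StampedMsg(10.01, "cloud"))
print(sync.push("radar", StampedMsg(10.02, "targets")))  # -> matched dict
```

In practice this buffering-and-nearest-timestamp matching is often delegated to middleware (e.g., the approximate-time policies in ROS message_filters), but the underlying idea is the same.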
Pages: 25