End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles

Cited by: 3
Authors
Gu, Junyi [1 ]
Lind, Artjom [2 ]
Chhetri, Tek Raj [3 ,4 ]
Bellone, Mauro [5 ]
Sell, Raivo [1 ]
Affiliations
[1] Tallinn Univ Technol, Dept Mech & Ind Engn, EE-12616 Tallinn, Estonia
[2] Univ Tartu, Inst Comp Sci, Intelligent Transportat Syst Lab, EE-51009 Tartu, Estonia
[3] Univ Innsbruck, Semant Technol Inst STI Innsbruck, Dept Comp Sci, A-6020 Innsbruck, Austria
[4] Ctr Artificial Intelligence AI Res Nepal, Sundarharaincha 56604, Nepal
[5] Tallinn Univ Technol, FinEst Ctr Smart Cities, EE-19086 Tallinn, Estonia
Keywords
multimodal sensors; autonomous driving; dataset collection framework; sensor calibration and synchronization; sensor fusion; CALIBRATION; CAMERA; ROAD; VISION; FUSION; RADAR;
DOI
10.3390/s23156783
Chinese Library Classification (CLC)
O65 [Analytical Chemistry]
Subject classification codes
070302; 081704
Abstract
Autonomous driving vehicles rely on sensors for the robust perception of their surroundings. Such vehicles are equipped with multiple perceptive sensors with a high level of redundancy to ensure safety and reliability in any driving condition. However, multi-sensor systems such as camera, LiDAR, and radar raise requirements related to sensor calibration and synchronization, which are the fundamental building blocks of any autonomous system. On the other hand, sensor fusion and integration have become important aspects of autonomous driving research and directly determine the efficiency and accuracy of advanced functions such as object detection and path planning. Classical model-based estimation and data-driven models are the two mainstream approaches to achieving such integration. Most recent research is shifting to the latter, which shows high robustness in real-world applications but requires large quantities of data to be collected, synchronized, and properly categorized. However, there are two major gaps in existing work: (i) it lacks fusion (and synchronization) of multiple sensors, namely camera, LiDAR, and radar; and (ii) it lacks a generic, scalable, and user-friendly end-to-end implementation. To generalize the implementation of the multi-sensor perceptive system, we introduce an end-to-end generic sensor dataset collection framework that includes both hardware deployment solutions and sensor fusion algorithms. The framework prototype integrates a diverse set of sensors, namely camera, LiDAR, and radar. Furthermore, we present a universal toolbox to calibrate and synchronize the three types of sensors based on their characteristics. The framework also includes fusion algorithms that exploit the complementary strengths of the three sensors and fuse their sensory information in a manner that supports object detection and tracking research. The generality of this framework makes it applicable to any robotic or autonomous application and suitable for quick, large-scale practical deployment.
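To make the calibration and fusion step described in the abstract concrete, the following minimal sketch shows one common camera-LiDAR fusion operation: projecting LiDAR points onto the image plane using the extrinsic and intrinsic matrices produced by calibration. The function name, matrix conventions, and NumPy implementation are illustrative assumptions and are not taken from the authors' toolbox.

import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    # Illustrative sketch (not the paper's toolbox code): project 3-D LiDAR
    # points onto the camera image plane using calibration results.
    #   points_lidar : (N, 3) XYZ points in the LiDAR frame
    #   T_cam_lidar  : (4, 4) homogeneous extrinsic transform, LiDAR -> camera
    #   K            : (3, 3) camera intrinsic matrix
    # Returns pixel coordinates (M, 2) and depths (M,) of points in front of the camera.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])  # homogeneous coords
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]  # apply extrinsic calibration
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]      # keep points in front of the camera
    uv_h = (K @ pts_cam.T).T                    # perspective projection with intrinsics
    uv = uv_h[:, :2] / uv_h[:, 2:3]             # normalize by depth
    return uv, pts_cam[:, 2]

Points that fall inside the image bounds can then be paired with camera detections, which is the kind of cross-sensor association the abstract describes as helpful for object detection and tracking; radar returns can be projected the same way with their own extrinsic transform.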
Pages: 25