End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles

Cited by: 3
Authors
Gu, Junyi [1 ]
Lind, Artjom [2 ]
Chhetri, Tek Raj [3 ,4 ]
Bellone, Mauro [5 ]
Sell, Raivo [1 ]
Affiliations
[1] Tallinn Univ Technol, Dept Mech & Ind Engn, EE-12616 Tallinn, Estonia
[2] Univ Tartu, Inst Comp Sci, Intelligent Transportat Syst Lab, EE-51009 Tartu, Estonia
[3] Univ Innsbruck, Semant Technol Inst STI Innsbruck, Dept Comp Sci, A-6020 Innsbruck, Austria
[4] Ctr Artificial Intelligence AI Res Nepal, Sundarharaincha 56604, Nepal
[5] Tallinn Univ Technol, FinEst Ctr Smart Cities, EE-19086 Tallinn, Estonia
Keywords
multimodal sensors; autonomous driving; dataset collection framework; sensor calibration and synchronization; sensor fusion; CALIBRATION; CAMERA; ROAD; VISION; FUSION; RADAR
DOI
10.3390/s23156783
CLC classification
O65 [Analytical Chemistry]
Subject classification codes
070302; 081704
Abstract
Autonomous driving vehicles rely on sensors for robust perception of their surroundings. Such vehicles are equipped with multiple perceptive sensors with a high level of redundancy to ensure safety and reliability in any driving condition. However, multi-sensor systems such as camera, LiDAR, and radar raise requirements related to sensor calibration and synchronization, which are the fundamental building blocks of any autonomous system. At the same time, sensor fusion and integration have become important aspects of autonomous driving research and directly determine the efficiency and accuracy of advanced functions such as object detection and path planning. Classical model-based estimation and data-driven models are the two mainstream approaches to achieving such integration. Most recent research is shifting to the latter, which shows high robustness in real-world applications but requires large quantities of data to be collected, synchronized, and properly categorized. However, existing works leave two major research gaps: (i) they lack fusion (and synchronization) of multiple sensors, namely camera, LiDAR, and radar; and (ii) they lack a generic, scalable, and user-friendly end-to-end implementation. To generalize the implementation of the multi-sensor perceptive system, we introduce an end-to-end generic sensor dataset collection framework that includes both hardware deployment solutions and sensor fusion algorithms. The framework prototype integrates a diverse set of sensors: camera, LiDAR, and radar. Furthermore, we present a universal toolbox to calibrate and synchronize the three types of sensors based on their characteristics. The framework also includes fusion algorithms that exploit the merits of the three sensors and fuse their sensory information in a manner helpful for object detection and tracking research. The generality of this framework makes it applicable to any robotic or autonomous application and suitable for quick and large-scale practical deployment.
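The synchronization problem the abstract describes (streams running at different rates, e.g. a camera at ~30 Hz and a LiDAR at ~10 Hz, must be paired before fusion) is commonly solved by nearest-timestamp matching. The sketch below is a minimal illustration of that idea, not the paper's actual toolbox; the function name and the 50 ms tolerance `tol` are assumptions for the example.

```python
from bisect import bisect_left

def nearest_timestamp_sync(cam_stamps, lidar_stamps, tol=0.05):
    """Pair each camera timestamp with the nearest LiDAR timestamp.

    Both lists are sorted, in seconds; pairs farther apart than tol
    are dropped so stale scans are never fused with a fresh frame.
    """
    pairs = []
    for t in cam_stamps:
        i = bisect_left(lidar_stamps, t)
        # Candidates: the LiDAR scan just before and just after t.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(lidar_stamps)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(lidar_stamps[k] - t))
        if abs(lidar_stamps[j] - t) <= tol:
            pairs.append((t, lidar_stamps[j]))
    return pairs

# Camera at ~30 Hz, LiDAR at ~10 Hz: every kept pair is within 50 ms.
cam = [0.00, 0.033, 0.066, 0.100]
lidar = [0.01, 0.11]
print(nearest_timestamp_sync(cam, lidar))
# → [(0.0, 0.01), (0.033, 0.01), (0.066, 0.11), (0.1, 0.11)]
```

In practice this same policy is what approximate-time synchronizers in robotics middleware implement; the tolerance trades off pairing coverage against the spatial error introduced by vehicle motion between the two capture instants.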
Pages: 25
Related Papers
50 records total
  • [31] Performance optimization of autonomous driving control under end-to-end deadlines
    Yunhao Bai
    Li Li
    Zejiang Wang
    Xiaorui Wang
    Junmin Wang
    Real-Time Systems, 2022, 58 : 509 - 547
  • [32] Multi-modal policy fusion for end-to-end autonomous driving
    Huang, Zhenbo
    Sun, Shiliang
    Zhao, Jing
    Mao, Liang
    INFORMATION FUSION, 2023, 98
  • [33] Stabilization Approaches for Reinforcement Learning-Based End-to-End Autonomous Driving
    Chen, Siyuan
    Wang, Meiling
    Song, Wenjie
    Yang, Yi
    Li, Yujun
    Fu, Mengyin
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2020, 69 (05) : 4740 - 4750
  • [34] Interpretable End-to-End Urban Autonomous Driving With Latent Deep Reinforcement Learning
    Chen, Jianyu
    Li, Shengbo Eben
    Tomizuka, Masayoshi
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2022, 23 (06) : 5068 - 5078
  • [35] Generative Adversarial Imitation Learning for End-to-End Autonomous Driving on Urban Environments
    Karl Couto, Gustavo Claudio
    Antonelo, Eric Aislan
    2021 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI 2021), 2021,
  • [36] Attacking vision-based perception in end-to-end autonomous driving models
    Boloor, Adith
    Garimella, Karthik
    He, Xin
    Gill, Christopher
    Vorobeychik, Yevgeniy
    Zhang, Xuan
    JOURNAL OF SYSTEMS ARCHITECTURE, 2020, 110
  • [37] End-to-end Learning Approach for Autonomous Driving: A Convolutional Neural Network Model
    Wang, Yaqin
    Liu, Dongfang
    Jeon, Hyewon
    Chu, Zhiwei
    Matson, Eric T.
    PROCEEDINGS OF THE 11TH INTERNATIONAL CONFERENCE ON AGENTS AND ARTIFICIAL INTELLIGENCE (ICAART), VOL 2, 2019, : 833 - 839
  • [38] Investigating the Impact of Time-Lagged End-to-End Control in Autonomous Driving
    Asai, Haruna
    Hashimoto, Yoshihiro
    Lisi, Giuseppe
    INTELLIGENT HUMAN SYSTEMS INTEGRATION 2020, 2020, 1131 : 111 - 117
  • [39] Simple Physical Adversarial Examples against End-to-End Autonomous Driving Models
    Boloor, Adith
    He, Xin
    Gill, Christopher
    Vorobeychik, Yevgeniy
    Zhang, Xuan
    2019 IEEE INTERNATIONAL CONFERENCE ON EMBEDDED SOFTWARE AND SYSTEMS (ICESS), 2019,
  • [40] ICOP: Image-based Cooperative Perception for End-to-End Autonomous Driving
    Li, Lantao
    Cheng, Yujie
    Sun, Chen
    Zhang, Wenqi
    2024 35TH IEEE INTELLIGENT VEHICLES SYMPOSIUM, IEEE IV 2024, 2024, : 2367 - 2374