IDOL: A Framework for IMU-DVS Odometry using Lines

Citations: 25
Authors
Le Gentil, Cedric [1 ,2 ]
Tschopp, Florian [1 ]
Alzugaray, Ignacio [3 ]
Vidal-Calleja, Teresa [2 ]
Siegwart, Roland [1 ]
Nieto, Juan [1 ]
Affiliations
[1] Swiss Federal Institute of Technology (ETH Zurich), Autonomous Systems Lab, Zurich, Switzerland
[2] University of Technology Sydney, School of Mechanical and Mechatronic Engineering, Centre for Autonomous Systems, Sydney, NSW, Australia
[3] Swiss Federal Institute of Technology (ETH Zurich), Vision for Robotics Lab, Zurich, Switzerland
Source
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2020
Keywords
ROBUST; VERSATILE; MOTION; SLAM
DOI
10.1109/IROS45743.2020.9341208
Chinese Library Classification
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
In this paper, we introduce IDOL, an optimization-based framework for IMU-DVS Odometry using Lines. Event cameras, also called Dynamic Vision Sensors (DVSs), generate highly asynchronous streams of events triggered upon illumination changes for each individual pixel. This novel paradigm presents advantages in low illumination conditions and high-speed motions. Nonetheless, this unconventional sensing modality brings new challenges to perform scene reconstruction or motion estimation. The proposed method leverages a continuous-time representation of the inertial readings to associate each event with timely accurate inertial data. The method's front-end extracts event clusters that belong to line segments in the environment, whereas the back-end estimates the system's trajectory alongside the lines' 3D position by minimizing point-to-line distances between individual events and the lines' projection in the image space. A novel attraction/repulsion mechanism is presented to accurately estimate the lines' extremities, avoiding their explicit detection in the event data. The proposed method is benchmarked against a state-of-the-art frame-based visual-inertial odometry framework using public datasets. The results show that IDOL performs at the same order of magnitude on most datasets and even shows better orientation estimates. These findings can have a great impact on new algorithms for DVS.
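The core residual described in the abstract is the point-to-line distance between an event's pixel location and the image-space projection of a 3D line. The following is a minimal sketch of that residual only, not the authors' implementation; the function name and the endpoint representation are illustrative assumptions.

```python
import numpy as np

def point_to_line_distance(event_px, p0, p1):
    """Perpendicular distance (in pixels) from a 2D event location to the
    infinite line through p0 and p1, the projected line extremities."""
    d = p1 - p0                           # line direction in the image
    n = np.array([-d[1], d[0]])           # normal to the direction
    n = n / np.linalg.norm(n)             # unit normal
    return abs(np.dot(event_px - p0, n))  # magnitude of the signed distance

# An event lying exactly on the projected segment has zero residual.
e = np.array([1.0, 1.0])
print(point_to_line_distance(e, np.array([0.0, 0.0]), np.array([2.0, 2.0])))  # → 0.0
```

In the paper's back-end, residuals of this form (one per event, against the event cluster's associated line) are minimized jointly over the trajectory and the lines' 3D parameters.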
Pages: 5863-5870
Page count: 8
Related References (37 total)
[21] Manderscheid, Jacques; Sironi, Amos; Bourdis, Nicolas; Migliore, Davide; Lepetit, Vincent. Speed Invariant Time Surface for Learning to Detect Corner Points with Event-Based Cameras. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2019), 2019:10237-10246.
[22] Mueggler, E., 2015, Robotics: Science and Systems XI.
[23] Mueggler, Elias; Gallego, Guillermo; Rebecq, Henri; Scaramuzza, Davide. Continuous-Time Visual-Inertial Odometry for Event Cameras. IEEE Transactions on Robotics, 2018, 34(6):1425-1440.
[24] Mueggler, Elias; Rebecq, Henri; Gallego, Guillermo; Delbruck, Tobi; Scaramuzza, Davide. The Event-Camera Dataset and Simulator: Event-Based Data for Pose Estimation, Visual Odometry, and SLAM. International Journal of Robotics Research, 2017, 36(2):142-149.
[25] Mur-Artal, Raul; Montiel, J. M. M.; Tardos, Juan D. ORB-SLAM: A Versatile and Accurate Monocular SLAM System. IEEE Transactions on Robotics, 2015, 31(5):1147-1163.
[26] Qin, Tong; Li, Peiliang; Shen, Shaojie. VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator. IEEE Transactions on Robotics, 2018, 34(4):1004-1020.
[27] Rebecq, H., 2017, Proceedings of the British Machine Vision Conference (BMVC).
[28] Rebecq, Henri; Horstschaefer, Timo; Gallego, Guillermo; Scaramuzza, Davide. EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time. IEEE Robotics and Automation Letters, 2017, 2(2):593-600.
[29] Schneider, Thomas. IEEE Robotics and Automation Letters, 2018, 3:1418. DOI: 10.1109/LRA.2018.2800113.
[30] Siegwart, R., 2011, Introduction to Autonomous Mobile Robots.