Asynchronous, Photometric Feature Tracking Using Events and Frames

Cited by: 84
Authors
Gehrig, Daniel
Rebecq, Henri [1 ]
Gallego, Guillermo
Scaramuzza, Davide
Affiliations
[1] Univ Zurich, Dept Informat, Zurich, Switzerland
Source
COMPUTER VISION - ECCV 2018, PT XII | 2018 / Vol. 11216
Funding
Swiss National Science Foundation;
Keywords
VISUAL ODOMETRY; VISION; SLAM;
DOI
10.1007/978-3-030-01258-8_46
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We present a method that leverages the complementarity of event cameras and standard cameras to track visual features with low latency. Event cameras are novel sensors that output pixel-level brightness changes, called "events". They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction, and the events provide low-latency updates. In contrast to previous works, which are based on heuristics, this is the first principled method that uses raw intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are both more accurate (subpixel accuracy) and longer than the state of the art, across a wide variety of scenes.
Pages: 766-781
Page count: 16
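The generative event model mentioned in the abstract refers to the standard idealized model of an event camera: each pixel fires an event whenever its log-intensity deviates from a stored reference level by a fixed contrast threshold. As a minimal illustrative sketch (not the paper's actual implementation; the threshold value `C = 0.15` and the single-pixel setup are assumptions for illustration):

```python
import math

def generate_events(log_intensities, timestamps, C=0.15):
    """Simulate the idealized generative event model for a single pixel.

    An event (timestamp, polarity) fires whenever the log-intensity
    moves away from the last reference level by the contrast threshold C.
    C = 0.15 is an assumed, typical contrast sensitivity, not a value
    taken from the paper.
    """
    events = []
    ref = log_intensities[0]  # reference level at the last event
    for t, log_i in zip(timestamps[1:], log_intensities[1:]):
        # Brightness increase crossed the threshold: positive event(s).
        while log_i - ref >= C:
            ref += C
            events.append((t, +1))
        # Brightness decrease crossed the threshold: negative event(s).
        while ref - log_i >= C:
            ref -= C
            events.append((t, -1))
    return events

# A steadily brightening pixel yields a train of positive-polarity events.
logs = [math.log(1.0 + 0.1 * k) for k in range(10)]
ts = [k * 1e-3 for k in range(10)]  # timestamps in seconds
evts = generate_events(logs, ts)
```

Note how the events encode only *changes* in brightness, and how the same intensity profile traversed in the opposite direction would flip every polarity; this is the motion-direction dependence that makes event-only correspondence hard and motivates combining events with frames.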