luvHarris: A Practical Corner Detector for Event-Cameras

Cited by: 14
Authors
Glover, Arren [1 ]
Dinale, Aiko [1 ]
Rosa, Leandro De Souza [1 ]
Bamford, Simeon [1 ]
Bartolozzi, Chiara [1 ]
Affiliations
[1] Ist Italiano Tecnol, Event Driven Percept Robot Grp, I-16163 Genoa, Italy
Keywords
Event-driven vision; robotic-vision; event-camera; corner-detection; real-time;
DOI
10.1109/TPAMI.2021.3135635
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
A number of corner detection methods have been proposed for event-cameras in recent years, as event-driven computer vision has become more accessible. However, the current state-of-the-art methods have either unsatisfactory accuracy or insufficient real-time performance for practical use, for example when a camera is moved arbitrarily in an unconstrained environment. In this paper, we present yet another corner-detection method, dubbed look-up event-Harris (luvHarris), that employs the Harris algorithm for high accuracy while achieving improved event throughput. Our method makes two major contributions: 1. a novel "threshold ordinal event-surface" that removes certain tuning parameters and is well suited to Harris operations, and 2. an implementation of the Harris algorithm in which the computational load per event is minimised and the computationally heavy convolutions are performed only 'as-fast-as-possible', i.e., only as computational resources become available. The result is a practical, real-time, and robust corner detector that runs at more than 2.6x the speed of the current state-of-the-art; a necessity when using a high-resolution event-camera in real time. We explain the considerations taken in the approach, compare the algorithm to the current state-of-the-art in terms of computational performance and detection accuracy, and discuss the validity of the proposed approach for event-cameras.
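The abstract names two ingredients: an ordinal event-surface updated per event, and a Harris corner response computed over that surface. The sketch below is not the authors' implementation; it is a minimal NumPy-only illustration, assuming a simplified update rule (decrement a local neighbourhood on each event and saturate the event pixel) and a textbook structure-tensor Harris score. All parameter names (`k`, `win`, `t_max`) are illustrative choices, not values from the paper.

```python
import numpy as np

def update_surface(surface, x, y, k=7, t_max=255):
    """Assumed ordinal-style update: each incoming event at (x, y)
    decrements its k x k neighbourhood (floored at 0) and saturates
    the event pixel, so recent events keep the highest values."""
    r = k // 2
    region = surface[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
    region -= 1
    np.clip(region, 0, None, out=region)
    surface[y, x] = t_max

def harris_response(img, k=0.04, win=3):
    """Textbook Harris score R = det(M) - k * trace(M)^2 over a
    box-smoothed structure tensor M, computed on the event surface."""
    Iy, Ix = np.gradient(img.astype(float))          # image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy
    kernel = np.ones(win) / win                       # separable box filter

    def box(a):
        a = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, a)
        return np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, a)

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace

# Usage: a synthetic square patch has positive Harris response at its
# corners and zero response in the flat interior.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
```

The paper's point is scheduling, not the score itself: the per-event work is only the cheap surface update, while the expensive `harris_response` pass runs asynchronously whenever compute is free, decoupling event throughput from convolution cost.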
Pages: 10087-10098
Page count: 12
Related Papers
18 in total
[1]   Asynchronous Corner Detection and Tracking for Event Cameras in Real Time [J].
Alzugaray, Ignacio ;
Chli, Margarita .
IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, 3 (04) :3177-3184
[2]  
[Anonymous], 2013, INT J RECENT TECHNOL
[3]  
Bradski G., 2000, DOBBS J SOFTW TOOLS
[4]   A 240 x 180 130 dB 3 μs Latency Global Shutter Spatiotemporal Vision Sensor [J].
Brandli, Christian ;
Berner, Raphael ;
Yang, Minhao ;
Liu, Shih-Chii ;
Delbruck, Tobi .
IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2014, 49 (10) :2333-2341
[5]   Detecting Stable Keypoints from Events through Image Gradient Prediction [J].
Chiberre, Philippe ;
Perot, Etienne ;
Sironi, Amos ;
Lepetit, Vincent .
2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2021, 2021, :1387-1394
[6]  
Harris C., 1988, P ALV VIS C, P10
[7]  
Li RX, 2019, IEEE INT C INT ROBOT, P6223, DOI [10.1109/IROS40897.2019.8968491, 10.1109/iros40897.2019.8968491]
[8]   A 128 x 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor [J].
Lichtsteiner, Patrick ;
Posch, Christoph ;
Delbruck, Tobi .
IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2008, 43 (02) :566-576
[9]  
Manderscheid J., 2019, PROC IEEE/CVF C COMPU, P10237
[10]  
Mueggler E., 2017, BRIT MACH VIS C BMVC, P1