ELiSeD - An Event-Based Line Segment Detector

Cited by: 27
Authors
Brandli, Christian [1]
Strubel, Jonas [1]
Keller, Susanne [1]
Scaramuzza, Davide [2]
Delbruck, Tobi [1]
Affiliations
[1] Univ Zurich, Inst Neuroinformat, Winterthurerstr 190, Zurich, Switzerland
[2] Univ Zurich, Robot & Percept Grp, Andreasstr 15m, Zurich, Switzerland
Source
2016 2ND INTERNATIONAL CONFERENCE ON EVENT-BASED CONTROL, COMMUNICATION, AND SIGNAL PROCESSING (EBCCSP) | 2016
Keywords
event-based; computer vision; machine vision; line segment detector; visual feature; DVS; DAVIS; silicon retina;
DOI
10.1109/EBCCSP.2016.7605244
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Event-based temporal contrast vision sensors such as the Dynamic Vision Sensor (DVS) offer high dynamic range, low latency, and low power consumption. Instead of frames, these sensors produce a stream of events that encode discrete amounts of temporal contrast. Surfaces and objects with sufficient spatial contrast trigger events when they move relative to the sensor, which thus performs inherent edge detection. These sensors are well suited for motion capture, but suitable event-based, low-level features that allow assigning events to spatial structures have so far been lacking. A general solution of the so-called event correspondence problem, i.e. inferring which events are caused by the motion of the same spatial feature, would allow applying these sensors to a multitude of tasks such as visual odometry or structure from motion. The proposed Event-based Line Segment Detector (ELiSeD) is a step towards solving this problem by parameterizing the event stream as a set of line segments. The event stream used to update these low-level features is continuous in time and has a high temporal resolution; this allows capturing even fast motions without having to solve the conventional frame-to-frame motion correspondence problem. The ELiSeD feature detector and tracker runs in real time on a laptop computer at image speeds of up to 1300 pix/s and can continuously track rotations of up to 720 deg/s. The algorithm is open-sourced in the jAER project.
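The core idea of parameterizing an event cluster as a line segment can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' jAER implementation: it assumes a set of DVS event coordinates already assigned to one edge, and fits the segment's orientation and endpoints from the principal axis of the event cloud (a moment-of-inertia style fit; the function name and interface are hypothetical).

```python
import numpy as np

def fit_line_segment(events):
    """Fit a line segment to a cluster of DVS event coordinates.

    events: iterable of (x, y) pixel coordinates assumed to belong to
    the same moving edge. Returns (p0, p1, angle_deg), where p0 and p1
    are the segment endpoints and angle_deg is the orientation in
    [0, 180). Illustrative sketch only, not the jAER ELiSeD code.
    """
    pts = np.asarray(list(events), dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # Principal axis of the event cloud gives the segment direction.
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    # Project events onto the principal axis to locate the endpoints.
    proj = centered @ direction
    p0 = centroid + proj.min() * direction
    p1 = centroid + proj.max() * direction
    angle_deg = np.degrees(np.arctan2(direction[1], direction[0])) % 180.0
    return p0, p1, angle_deg
```

In an event-driven setting such a fit would be updated incrementally as each event arrives, rather than recomputed over a batch; the batch form above just makes the geometry explicit.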
Pages: 7