Contour Motion Estimation for Asynchronous Event-Driven Cameras

Cited by: 42
Authors
Barranco, Francisco [1 ,2 ,3 ]
Fermueller, Cornelia [1 ,2 ]
Aloimonos, Yiannis [1 ,2 ]
Affiliations
[1] Univ Maryland, Dept Comp Sci, College Pk, MD 20742 USA
[2] Univ Maryland, UMIACS, College Pk, MD 20742 USA
[3] Univ Granada, Dept Comp Architecture & Technol, CITIC, ETSIIT, E-18071 Granada, Spain
Funding
U.S. National Science Foundation;
Keywords
Asynchronous event-based vision; motion contour; neuromorphic devices; real-time systems; CHIP; SEGMENTATION; INTEGRATION; PATTERNS; SPIKING;
DOI
10.1109/JPROC.2014.2347207
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology and Communication Technology];
Discipline codes
0808; 0809;
Abstract
This paper compares image motion estimation with asynchronous event-based cameras to computer vision approaches that take frame-based video sequences as input. Since dynamic events are triggered at significant intensity changes, which often occur at object boundaries, we refer to the event-based image motion as "contour motion." Algorithms are presented for the estimation of accurate contour motion from local spatio-temporal information for two camera models: the dynamic vision sensor (DVS), which asynchronously records temporal changes of the luminance, and a family of new sensors that combine DVS data with intensity signals. These algorithms take advantage of the high temporal resolution of the DVS and achieve robustness through a multiresolution scheme in time. It is shown that, because velocity and luminance information are coupled in the event distribution, the image motion estimation problem becomes much easier with the new sensors, which provide both events and image intensity, than with the DVS alone. Experiments on data synthesized from computer vision benchmarks show that our algorithm on the combined data outperforms computer vision methods in accuracy and can achieve real-time performance, and experiments on real data confirm the feasibility of the approach. Given that current image motion (or so-called optic flow) methods estimate poorly at object boundaries, the approach presented here could be used as a complement to optic flow techniques, and it opens new avenues for computer vision motion research.
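To make the idea of estimating motion "from local spatio-temporal information" concrete, the sketch below fits a plane to event timestamps in a small spatio-temporal neighborhood and reads the normal flow off the fitted gradient. This is a minimal, generic event-based scheme under stated assumptions, not the authors' algorithm from the paper; the function name, parameters, and the (x, y, t) event layout are hypothetical choices for illustration.

import numpy as np

def normal_flow_from_events(events, x0, y0, t0, radius=3, window=0.01):
    # Hypothetical illustration, not the paper's method.
    # Estimate normal flow at pixel (x0, y0) near time t0 (seconds) by
    # fitting a plane t = a*x + b*y + c to timestamps of nearby events.
    # events: (N, 3) float array of (x, y, t) tuples.
    # Returns (vx, vy) in pixels/second, or None if the fit is degenerate.
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    near = (np.abs(x - x0) <= radius) & (np.abs(y - y0) <= radius) \
           & (np.abs(t - t0) <= window)
    if near.sum() < 4:              # too few events to constrain a plane
        return None
    A = np.column_stack([x[near], y[near], np.ones(near.sum())])
    coeffs, *_ = np.linalg.lstsq(A, t[near], rcond=None)
    a, b = coeffs[0], coeffs[1]     # spatial gradient of the time surface
    g2 = a * a + b * b
    if g2 < 1e-12:                  # flat surface: no measurable motion
        return None
    # An edge moving with speed s along its normal satisfies |grad t| = 1/s,
    # so the normal flow vector is grad t / |grad t|^2.
    return a / g2, b / g2

A temporal multiresolution scheme of the kind the abstract mentions could plausibly be approximated by repeating such a fit over a sequence of shrinking window values and keeping the most consistent estimate, though the paper's actual procedure should be consulted for details.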
Pages: 1537-1556
Number of pages: 20