Asynchronous Event-Based Fourier Analysis

Cited by: 18
Authors
Sabatier, Quentin [1 ,2 ,3 ,4 ]
Ieng, Sio-Hoi [1 ,2 ,3 ]
Benosman, Ryad [1 ,2 ,3 ]
Affiliations
[1] UPMC Univ Paris 06, Sorbonne Univ, F-75252 Paris, France
[2] Inst Vis, UMR S 968, F-75012 Paris, France
[3] CNRS, UMR 7210, F-75012 Paris, France
[4] Gensight Biol, F-75012 Paris, France
Keywords
Address event representation (AER); event-based processing; fast Fourier transform; neuromorphic vision; SIGNAL; PERFORMANCE; VISION; FFT;
DOI
10.1109/TIP.2017.2661702
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper introduces a method to compute the FFT of a visual scene at a high temporal precision of around 1 μs from the output of an asynchronous event-based camera. Event-based cameras make it possible to go beyond the widespread and ingrained belief that acquiring a series of images at some fixed rate is a good way to capture visual motion. Each pixel adapts its own sampling rate to the visual input it receives and defines the timing of its own sampling points by reacting to changes in the amount of incident light. As a consequence, the sampling process is no longer governed by a fixed timing source but by the signal to be sampled itself, or more precisely by the variations of the signal in the amplitude domain. The acquisition paradigm of event-based cameras makes it possible to go beyond the conventional way of computing the FFT. The event-driven FFT algorithm relies on a heuristic methodology designed to operate directly on incoming gray-level events, updating the FFT incrementally while reducing both the computation and the data load. We show that, for reasonable levels of approximation and at equivalent frame periods below one millisecond, the method is faster and more efficient than computation from conventional image acquisition. Several experiments are carried out on indoor and outdoor scenes where both conventional and event-driven FFT computations are shown and compared.
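The incremental update at the heart of an event-driven FFT can be illustrated through the linearity of the discrete Fourier transform: when a gray-level event changes a single pixel, every Fourier coefficient can be corrected by a rank-1 term instead of recomputing a full FFT. The Python/NumPy sketch below shows only this exact-update idea; the class name IncrementalDFT, the method apply_event, and the absence of the paper's approximation and pruning heuristics are illustrative assumptions, not the authors' implementation.

# Hedged sketch (not the authors' algorithm): the 2-D DFT is linear, so a
# gray-level event that changes one pixel by delta updates every Fourier
# coefficient with a rank-1 correction instead of a full FFT recomputation.
import numpy as np

class IncrementalDFT:
    """Maintain the 2-D DFT of an N x M gray-level image under per-pixel events.

    The class name, apply_event(), and the exact (non-approximated) update are
    illustrative assumptions; the paper additionally uses heuristics to prune
    small updates and reduce the data load.
    """

    def __init__(self, image):
        self.image = image.astype(np.float64)      # current gray levels
        self.N, self.M = self.image.shape
        self.F = np.fft.fft2(self.image)           # current DFT coefficients
        # Twiddle-factor tables: Wx[u, x] = exp(-2*pi*i*u*x/N), Wy[v, y] likewise.
        x = np.arange(self.N)
        y = np.arange(self.M)
        self.Wx = np.exp(-2j * np.pi * np.outer(x, x) / self.N)
        self.Wy = np.exp(-2j * np.pi * np.outer(y, y) / self.M)

    def apply_event(self, x0, y0, new_value):
        """Process one gray-level event: pixel (x0, y0) now reports new_value."""
        delta = new_value - self.image[x0, y0]
        if delta == 0.0:
            return
        self.image[x0, y0] = new_value
        # F[u, v] += delta * exp(-2*pi*i*(u*x0/N + v*y0/M)) for all (u, v).
        self.F += delta * np.outer(self.Wx[:, x0], self.Wy[:, y0])

# Usage: the incrementally maintained coefficients match a full recomputation.
img = np.random.rand(8, 8)
dft = IncrementalDFT(img)
dft.apply_event(3, 5, 0.9)
img[3, 5] = 0.9
assert np.allclose(dft.F, np.fft.fft2(img))

In this exact form each event costs O(N·M) complex multiply-adds; the speed and data-load gains reported in the paper come from the heuristic approximations that this sketch deliberately leaves out.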
Pages: 2192-2202
Number of pages: 11
References
45 in total
[1] Anonymous, 2016, INTEL INTEGRATED PER.
[2] Anonymous, 2016, POWERFFT ASIC.
[3] Anonymous, 2013, P VLSI CIRC VLSIC 20.
[4] Anonymous, 2016, Intel Math Kernel Library for Deep Neural Networks.
[5] Baas, B. M., "A low-power, high-performance, 1024-point FFT processor," IEEE Journal of Solid-State Circuits, vol. 34, no. 3, pp. 380-387, 1999.
[6] Benosman, R., Clercq, C., Lagorce, X., Ieng, S.-H., and Bartolozzi, C., "Event-Based Visual Flow," IEEE Transactions on Neural Networks and Learning Systems, vol. 25, no. 2, pp. 407-417, 2014.
[7] Benosman, R., Ieng, S.-H., Rogister, P., and Posch, C., "Asynchronous Event-Based Hebbian Epipolar Geometry," IEEE Transactions on Neural Networks, vol. 22, no. 11, pp. 1723-1734, 2011.
[8] Brandli, C., Berner, R., Yang, M., Liu, S.-C., and Delbruck, T., "A 240 x 180 130 dB 3 μs Latency Global Shutter Spatiotemporal Vision Sensor," IEEE Journal of Solid-State Circuits, vol. 49, no. 10, pp. 2333-2341, 2014.
[9] Chan, Y. K., Progress In Electromagnetics Research B, vol. 1, p. 269, 2008, DOI: 10.2528/PIERB07102301.
[10] Cochran, W. T., Cooley, J. W., Favin, D. L., Helms, H. D., Kaenel, R. A., Lang, W. W., Maling, G. C., Nelson, D. E., Rader, C. M., and Welch, P. D., "What Is Fast Fourier Transform," Proceedings of the IEEE, vol. 55, no. 10, pp. 1664+, 1967.