Optical flow estimation from event-based cameras and spiking neural networks

Cited: 16
Authors
Cuadrado, Javier [1 ]
Rancon, Ulysse [1 ]
Cottereau, Benoit R. [1 ,2 ]
Barranco, Francisco [3 ]
Masquelier, Timothee [1 ]
Affiliations
[1] Univ Toulouse III, CerCo, CNRS, UMR 5549, Toulouse, France
[2] CNRS IRL 2955, IPAL, Singapore, Singapore
[3] Univ Granada, Dept Comp Engn Automat & Robot, CITIC, Granada, Spain
Funding
National Research Foundation, Singapore;
Keywords
optical flow; event vision; spiking neural networks; neuromorphic computing; edge AI; power;
DOI
10.3389/fnins.2023.1160034
CLC classification
Q189 [Neuroscience];
Subject classification code
071006;
Abstract
Event-based cameras are raising interest within the computer vision community. These sensors operate with asynchronous pixels, emitting events, or "spikes", when the luminance change at a given pixel since the last event surpasses a certain threshold. Thanks to their inherent qualities, such as low power consumption, low latency, and high dynamic range, they seem particularly tailored to applications with challenging temporal constraints and safety requirements. Event-based sensors are an excellent fit for Spiking Neural Networks (SNNs), since coupling an asynchronous sensor with neuromorphic hardware can yield real-time systems with minimal power requirements. In this work, we seek to develop one such system, using both event sensor data from the DSEC dataset and spiking neural networks to estimate optical flow for driving scenarios. We propose a U-Net-like SNN which, after supervised training, is able to make dense optical flow estimations. To do so, we encourage both a minimal norm for the error vector and a minimal angle between the ground-truth and predicted flow, training our model with back-propagation using a surrogate gradient. In addition, the use of 3D convolutions allows us to capture the dynamic nature of the data by increasing the temporal receptive fields. Upsampling after each decoding stage ensures that each decoder's output contributes to the final estimation. Thanks to separable convolutions, we have been able to develop a lightweight model (compared to competitors) that nonetheless yields reasonably accurate optical flow estimates.
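The abstract's training objective combines two terms: the norm of the error vector (endpoint error) and the angle between the ground-truth and predicted flow vectors. The per-pixel sketch below illustrates that combination; the function name, the equal weighting of the two terms, and the `eps` stabilizer are assumptions for illustration, not the paper's exact formulation.

```python
import math

def flow_loss(pred, gt, eps=1e-8):
    """Hypothetical per-pixel flow loss: endpoint error (norm of the
    error vector) plus the angle between predicted and ground-truth
    flow. Equal weighting of the two terms is an assumption."""
    # Endpoint error: Euclidean norm of the difference vector.
    epe = math.hypot(pred[0] - gt[0], pred[1] - gt[1])
    # Angular error: angle between the two 2D flow vectors,
    # with eps guarding against zero-length vectors and the
    # cosine clamped into acos's valid domain.
    dot = pred[0] * gt[0] + pred[1] * gt[1]
    n_pred = math.hypot(pred[0], pred[1]) + eps
    n_gt = math.hypot(gt[0], gt[1]) + eps
    cos = max(-1.0, min(1.0, dot / (n_pred * n_gt)))
    angle = math.acos(cos)
    return epe + angle
```

In practice both terms would be computed over a dense flow field and averaged over valid pixels; penalizing the angle in addition to the endpoint error discourages predictions that have roughly the right magnitude but point in the wrong direction.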
Pages: 12