A Spiking Neural Network Model of Depth from Defocus for Event-based Neuromorphic Vision

Cited by: 28
Authors
Haessig, Germain [1 ]
Berthelon, Xavier [1 ]
Ieng, Sio-Hoi [1 ]
Benosman, Ryad [1 ,2 ,3 ]
Affiliations
[1] Sorbonne Univ, INSERM, CNRS, Inst Vis, 17 Rue Moreau, F-75012 Paris, France
[2] Univ Pittsburgh, Med Ctr, Biomed Sci Tower 3,Fifth Ave, Pittsburgh, PA 15213 USA
[3] Carnegie Mellon Univ, Inst Robot, 5000 Forbes Ave, Pittsburgh, PA 15213 USA
Funding
European Research Council
Keywords
LIQUID LENS; IMAGE BLUR; ACCOMMODATION; INFORMATION; CIRCUITS; CODE;
DOI
10.1038/s41598-019-40064-0
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Depth from defocus is an important mechanism that enables vision systems to perceive depth. While machine vision has developed several algorithms to estimate depth from the amount of defocus present at the focal plane, existing techniques are slow, energy demanding, and mainly rely on numerous acquisitions and massive amounts of filtering operations on the pixels' absolute luminance values. Recent advances in neuromorphic engineering offer an alternative approach, using event-based silicon retinas and neural processing devices inspired by the organizing principles of the brain. In this paper, we present a low-power, compact, and computationally inexpensive setup to estimate depth in a 3D scene in real time at high rates, which can be directly implemented with massively parallel, compact, low-latency, and low-power neuromorphic engineering devices. Exploiting the high temporal resolution of the event-based silicon retina, we are able to extract depth at 100 Hz for a power budget below 200 mW (10 mW for the camera, 90 mW for the liquid lens, and ~100 mW for the computation). We validate the model with experimental results, highlighting features that are consistent with both computational neuroscience and recent findings in retinal physiology. We demonstrate its efficiency with a prototype of a neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological depth-from-defocus experiments reported in the literature.
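The focal-sweep principle summarized in the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: as the liquid lens sweeps its optical power, each scene point passes through best focus once, producing a burst of events at that instant; the sweep step of peak event activity at a pixel therefore encodes its depth via the thin-lens relation 1/Z = P - 1/v (with `sensor_dist` = v, the fixed lens-to-sensor distance, an assumed parameter).

```python
import numpy as np

def depth_map_from_sweep(event_rates, sweep_powers, sensor_dist=0.02):
    """Recover a per-pixel depth map from a liquid-lens focal sweep.

    event_rates  : (T, H, W) array of per-pixel event counts at each sweep step
    sweep_powers : (T,) lens optical power in diopters at each step
    sensor_dist  : fixed lens-to-sensor distance v in meters (assumed value)
    """
    # Step of sharpest focus per pixel = step of maximal event activity
    t_star = np.argmax(event_rates, axis=0)
    # Optical power of the lens at that step, per pixel
    P_star = sweep_powers[t_star]
    # Thin-lens equation: 1/Z = P - 1/v  =>  Z = 1 / (P - 1/v)
    return 1.0 / (P_star - 1.0 / sensor_dist)

# Synthetic check: a surface at Z = 2 m is in focus at P = 1/Z + 1/v = 50.5 D
powers = np.linspace(50.1, 55.0, 50)
rates = np.exp(-((powers - 50.5) ** 2) / 0.01)[:, None, None]  # event burst at focus
depth = depth_map_from_sweep(rates, powers, sensor_dist=0.02)
```

The key design point is that only the *timing* of the event burst matters, not absolute luminance, which is what makes the scheme a natural fit for spike-based hardware.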
Pages: 11
Related Papers
50 records in total
  • [21] A Neuromorphic Tactile Perception System Based on Spiking Neural Network for Texture Recognition
    Liu, Ziyong
    Wang, Xiaoxin
    Xiang, Guiyao
    Wang, Zhiyong
    Shao, Yitian
    Liu, Honghai
    INTELLIGENT ROBOTICS AND APPLICATIONS, ICIRA 2024, PT IX, 2025, 15209 : 176 - 191
  • [22] Eye Tracking Based on Event Camera and Spiking Neural Network
    Jiang, Yizhou
    Wang, Wenwei
    Yu, Lei
    He, Chu
    ELECTRONICS, 2024, 13 (14)
  • [23] Neuromorphic vision: From sensors to event-based algorithms
    Lakshmi, Annamalai
    Chakraborty, Anirban
    Thakur, Chetan S.
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2019, 9 (04)
  • [24] A spiking neural network model for obstacle avoidance in simulated prosthetic vision
    Ge, Chenjie
    Kasabov, Nikola
    Liu, Zhi
    Yang, Jie
    INFORMATION SCIENCES, 2017, 399 : 30 - 42
  • [25] A neuromorphic depth-from-motion vision model with STDP adaptation
    Yang, ZJ
    Murray, A
    Wörgötter, F
    Cameron, K
    Boonsobhak, V
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2006, 17 (02): : 482 - 495
  • [26] Classification of multivariate data with a spiking neural network on neuromorphic hardware
    Michael Schmuker
    Thomas Pfeil
    Martin P Nawrot
    BMC Neuroscience, 14 (Suppl 1)
  • [27] Adversarial attacks on spiking convolutional neural networks for event-based vision
    Buechel, Julian
    Lenz, Gregor
    Hu, Yalun
    Sheik, Sadique
    Sorbaro, Martino
    FRONTIERS IN NEUROSCIENCE, 2022, 16
  • [28] Neuromorphic Implementation of Spiking Relational Neural Network for Motor Control
    Zhao, Jingyue
    Donati, Elisa
    Indiveri, Giacomo
    2020 2ND IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2020), 2020, : 89 - 93
  • [29] Asynchronous Bioplausible Neuron for Spiking Neural Networks for Event-Based Vision
    Kachole, Sanket
    Sajwani, Hussain
    Naeini, Fariborz Baghaei
    Makris, Dimitrios
    Zweiri, Yahya
    COMPUTER VISION - ECCV 2024, PT LXIV, 2025, 15122 : 399 - 415
  • [30] StereoSpike: Depth Learning With a Spiking Neural Network
    Rancon, Ulysse
    Cuadrado-Anibarro, Javier
    Cottereau, Benoit R.
    Masquelier, Timothee
    IEEE ACCESS, 2022, 10 : 127428 - 127439