Asynchronous Event-based Cooperative Stereo Matching Using Neuromorphic Silicon Retinas

Cited by: 0
Authors
Mohsen Firouzi
Jörg Conradt
Affiliations
[1] Technische Universität München, Neuroscientific System Theory
[2] Bernstein Center for Computational Neuroscience, Graduate School of Systemic Neurosciences
[3] Ludwig-Maximilian University of Munich
Source
Neural Processing Letters | 2016 / Volume 43
Keywords
Silicon retina; Event-based stereo matching; Cooperative network; Frameless 3D vision; Disparity detection
DOI
Not available
Abstract
Biologically-inspired event-driven silicon retinas, so-called dynamic vision sensors (DVS), allow efficient solutions for various visual perception tasks, e.g. surveillance, tracking, or motion detection. Similar to retinal photoreceptors, any perceived change in light intensity generates an event at the corresponding DVS pixel. The DVS thereby emits a stream of spatiotemporal events encoding visually perceived objects that, in contrast to the output of conventional frame-based cameras, is largely free of redundant background information. The DVS offers multiple additional advantages, but requires the development of radically new asynchronous, event-based information processing algorithms. In this paper we present a fully event-based disparity matching algorithm for reliable 3D depth perception using a dynamic cooperative neural network. The interaction between cooperative cells applies cross-disparity uniqueness constraints and within-disparity continuity constraints to asynchronously extract disparity for each new event, without any need to buffer individual events. We have investigated the algorithm's performance in several experiments; our results demonstrate smooth disparity maps computed in a purely event-based manner, even in scenes with temporally overlapping stimuli.
Pages: 311-326
Number of pages: 15
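The abstract describes a dynamic cooperative network in which cells coding the same disparity reinforce one another (within-disparity continuity constraint) while cells along the same line of sight at different disparities compete (cross-disparity uniqueness constraint), and the network is updated for every incoming event. The Python sketch below is one plausible, illustrative reading of such an event-driven update loop, not the authors' implementation; every name, constant, and data structure in it (the sensor resolution, disparity range MAX_D, decay constant TAU, weights W_EXC/W_INH, the report threshold, and the recent_right coincidence buffer) is an assumption introduced purely for illustration.

# Minimal sketch of an event-driven cooperative disparity update, loosely following
# the scheme outlined in the abstract. All constants and data structures below are
# illustrative assumptions, not values or structures taken from the paper. Events
# are assumed rectified, so that stereo matches lie on the same image row.

import numpy as np

WIDTH, HEIGHT, MAX_D = 128, 128, 32   # assumed DVS128-like resolution and disparity range
TAU = 20e-3                           # assumed activity decay time constant (seconds)
W_EXC, W_INH = 0.6, 0.4               # assumed within-/cross-disparity interaction weights
THRESHOLD = 1.0                       # assumed activity needed before a disparity is reported

# Cooperative-cell activity over disparity space, and the time of the last update.
C = np.zeros((HEIGHT, WIDTH, MAX_D))
last_t = 0.0

def on_event(x, y, t, polarity, recent_right):
    """Update the cooperative network for one left-camera event.

    recent_right is an assumed per-row store of recent right-camera event
    x-positions of matching polarity; the paper itself avoids buffering events,
    so this stands in for whatever temporal coincidence mechanism is used.
    """
    global C, last_t
    # Leaky integration: exponential decay of all cell activities since the last event.
    C *= np.exp(-(t - last_t) / TAU)
    last_t = t

    # Excite every disparity cell whose right-camera counterpart fired recently.
    for xr in recent_right[y]:
        d = x - xr
        if 0 <= d < MAX_D:
            C[y, x, d] += 1.0

    # Within-disparity continuity: gather support from spatial neighbours at the same d.
    y0, y1 = max(0, y - 1), min(HEIGHT, y + 2)
    x0, x1 = max(0, x - 1), min(WIDTH, x + 2)
    C[y, x, :] += W_EXC * C[y0:y1, x0:x1, :].mean(axis=(0, 1))

    # Cross-disparity uniqueness: cells along the same line of sight compete.
    total = C[y, x, :].sum()
    C[y, x, :] -= W_INH * (total - C[y, x, :]) / (MAX_D - 1)
    np.clip(C[y, x, :], 0.0, None, out=C[y, x, :])

    # Report a disparity for this event only if the winning cell is confident enough.
    d_star = int(np.argmax(C[y, x, :]))
    return d_star if C[y, x, d_star] > THRESHOLD else None

Right-camera events would be handled symmetrically on the same activity volume; the point the sketch tries to capture from the abstract is that disparity is read out per event from a continuously decaying cooperative state rather than from buffered event frames.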
Related Papers
50 records in total
  • [1] Asynchronous Event-based Cooperative Stereo Matching Using Neuromorphic Silicon Retinas
    Firouzi, Mohsen
    Conradt, Joerg
    NEURAL PROCESSING LETTERS, 2016, 43 (02) : 311 - 326
  • [2] Asynchronous Event-Based Binocular Stereo Matching
    Rogister, Paul
    Benosman, Ryad
    Ieng, Sio-Hoi
    Lichtsteiner, Patrick
    Delbruck, Tobi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2012, 23 (02) : 347 - 353
  • [3] Event-based stereo matching using semiglobal matching
    Xie, Zhen
    Zhang, Jianhua
    Wang, Pengfei
    INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2018, 15 (01)
  • [4] Visual Tracking Using Neuromorphic Asynchronous Event-Based Cameras
    Ni, Zhenjiang
    Ieng, Sio-Hoi
    Posch, Christoph
    Regnier, Stephane
    Benosman, Ryad
    NEURAL COMPUTATION, 2015, 27 (04) : 925 - 953
  • [5] Event-based 3D reconstruction from neuromorphic retinas
    Carneiro, Joao
    Ieng, Sio-Hoi
    Posch, Christoph
    Benosman, Ryad
    NEURAL NETWORKS, 2013, 45 : 27 - 38
  • [6] Asynchronous event-based corner detection and matching
    Clady, Xavier
    Ieng, Sio-Hoi
    Benosman, Ryad
    NEURAL NETWORKS, 2015, 66 : 91 - 106
  • [7] Event-based silicon retinas for fast digital vision
    Delbruck, Tobi
    Lichtsteiner, Patrick
    Berner, Raphael
    Conradt, Jorg
    Liu, Shih-Chii
    NEUROSCIENCE RESEARCH, 2010, 68 : E31 - E31
  • [8] An Active Approach to Solving the Stereo Matching Problem using Event-Based Sensors
    Martel, Julien N. P.
    Mueller, Jonathan
    Conradt, Joerg
    Sandamirskaya, Yulia
    2018 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2018
  • [9] Learning Local Event-based Descriptor for Patch-based Stereo Matching
    Liu, Peigen
    Chen, Guang
    Li, Zhijun
    Tang, Huajin
    Knoll, Alois
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2022), 2022
  • [10] A 4096 channel event-based multielectrode array with asynchronous outputs compatible with neuromorphic processors
    Cartiglia, Matteo
    Costa, Filippo
    Narayanan, Shyam
    Bui, Cat-Vu H.
    Ulusan, Hasan
    Risi, Nicoletta
    Haessig, Germain
    Hierlemann, Andreas
    Cardes, Fernando
    Indiveri, Giacomo
    NATURE COMMUNICATIONS, 2024, 15 (01)