Automotive Object Detection via Learning Sparse Events by Spiking Neurons

Times cited: 0
Authors
Zhang, Hu [1 ]
Li, Yanchen [1 ]
Leng, Luziwei [2 ]
Che, Kaiwei [1 ]
Liu, Qian [2 ]
Guo, Qinghai [2 ]
Liao, Jianxing [2 ]
Cheng, Ran [1 ]
Affiliations
[1] Southern University of Science and Technology, Shenzhen 518055, China
[2] Huawei Technologies Co., Ltd., Advanced Computing and Storage Lab, Shenzhen 518055, China
Keywords
Object detection; task analysis; neurons; training; vehicle dynamics; feature extraction; adaptation models; deep learning; dynamical vision sensor (DVS); spiking neural networks (SNNs); neural networks; vision
DOI
10.1109/TCDS.2024.3410371
CLC number
TP18 [Theory of Artificial Intelligence]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Event-based sensors, distinguished by their high temporal resolution of 1 μs and a dynamic range of 120 dB, stand out as ideal tools for deployment in fast-paced settings such as vehicles and drones. Traditional object detection techniques that utilize artificial neural networks (ANNs) face challenges due to the sparse and asynchronous nature of the events these sensors capture. In contrast, spiking neural networks (SNNs) offer a promising alternative, providing a temporal representation that is inherently aligned with event-based data. This article explores the unique membrane potential dynamics of SNNs and their ability to modulate sparse events. We introduce a spike-triggered adaptive threshold mechanism designed for stable training. Building on these insights, we present a specialized spiking feature pyramid network (SpikeFPN) optimized for automotive event-based object detection. Comprehensive evaluations demonstrate that SpikeFPN surpasses both traditional SNNs and advanced ANNs enhanced with attention mechanisms. Notably, SpikeFPN achieves a mean average precision (mAP) of 0.477 on the GEN1 automotive detection (GAD) benchmark dataset, a significant improvement over the selected SNN baselines. Moreover, the efficient design of SpikeFPN delivers robust performance while reducing computational cost, owing to its innate sparse computation capabilities.
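The spike-triggered adaptive threshold mentioned in the abstract can be illustrated with a minimal sketch: a leaky integrate-and-fire (LIF) neuron whose firing threshold jumps by a fixed increment after each spike and decays back toward its baseline, which damps runaway firing on dense event bursts. The function name and the parameter values (`tau_m`, `beta`, `tau_th`) below are illustrative assumptions, not the paper's exact formulation.

```python
def lif_adaptive_threshold(inputs, tau_m=0.9, v_th0=1.0, beta=0.2, tau_th=0.95):
    """LIF neuron with a spike-triggered adaptive threshold (illustrative sketch).

    inputs : sequence of input currents, one per discrete time step
    tau_m  : membrane leak factor per step
    v_th0  : baseline firing threshold
    beta   : threshold increment added on each spike
    tau_th : decay factor pulling the threshold back toward v_th0
    """
    v, v_th = 0.0, v_th0
    spikes = []
    for x in inputs:
        v = tau_m * v + x                                   # leaky integration
        s = 1 if v >= v_th else 0                           # fire on crossing
        spikes.append(s)
        v = v * (1 - s)                                     # hard reset after a spike
        v_th = v_th0 + tau_th * (v_th - v_th0) + beta * s   # spike-triggered adaptation
    return spikes
```

Under a constant input of 0.5, the neuron fires periodically; raising `beta` lengthens the inter-spike interval because each spike leaves behind a higher threshold that must decay before the next crossing.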
Pages: 2110–2124
Page count: 15