Deep SCNN-Based Real-Time Object Detection for Self-Driving Vehicles Using LiDAR Temporal Data

Cited: 29
Authors
Zhou, Shibo [1 ]
Chen, Ying [2 ]
Li, Xiaohua [1 ]
Sanyal, Arindam [3 ]
Affiliations
[1] SUNY Binghamton, Dept Elect & Comp Engn, Binghamton, NY 13902 USA
[2] Harbin Inst Technol, Sch Management, Dept Management Sci & Engn, Harbin 150000, Peoples R China
[3] SUNY Buffalo, Dept Elect Engn, Buffalo, NY 14260 USA
Funding
National Natural Science Foundation of China;
Keywords
Spiking convolutional neural network; LiDAR temporal data; energy consumption; real-time object detection; NEURAL-NETWORKS;
DOI
10.1109/ACCESS.2020.2990416
Chinese Library Classification (CLC)
TP [Automation technology, computer technology];
Discipline code
0812;
Abstract
Real-time, accurate detection of three-dimensional (3D) objects is a fundamental necessity for self-driving vehicles. Most existing computer vision approaches are based on convolutional neural networks (CNNs). Although CNN-based approaches can achieve high detection accuracy, their high energy consumption is a severe drawback, so novel energy-efficient approaches should be explored. The spiking neural network (SNN) is a promising candidate because it consumes orders of magnitude less energy than a CNN. Unfortunately, the study of SNNs has so far been limited to small networks; their application to large 3D object detection networks has remained largely open. In this paper, we integrate a spiking convolutional neural network (SCNN) with temporal coding into the YOLOv2 architecture for real-time object detection. To take advantage of spiking signals, we develop a novel data preprocessing layer that translates 3D point-cloud data into spike time data. We propose an analog circuit to implement the non-leaky integrate-and-fire neuron used in our SCNN, from which the energy consumption of each spike is estimated. Moreover, we present a method to calculate the sparsity and the energy consumption of the overall network. Extensive experiments on the KITTI dataset show that the proposed network reaches detection accuracy competitive with existing approaches, yet with much lower average energy consumption. If implemented in dedicated hardware, our network could have a mean sparsity of 56.24% and a total energy consumption of only 0.247 mJ. Implemented on an NVIDIA GTX 1080i GPU, it achieves a frame rate of 35.7 fps, high enough for real-time object detection.
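The abstract's core pipeline, time-to-first-spike encoding of preprocessed LiDAR features followed by non-leaky integrate-and-fire (IF) neurons, can be sketched in a few lines. This is a minimal illustration rather than the paper's implementation: the linear intensity-to-time mapping, the coding window `T_MAX`, the example weights, and the `threshold` value are all assumptions for demonstration.

```python
import numpy as np

T_MAX = 1.0  # assumed length of the temporal coding window (arbitrary units)

def encode_temporal(x, t_max=T_MAX):
    """Time-to-first-spike coding: larger input values fire earlier.

    `x` is assumed normalized to [0, 1] (e.g., voxelized LiDAR
    occupancy/intensity features); zero inputs never fire (time = inf).
    """
    x = np.clip(x, 0.0, 1.0)
    return np.where(x > 0, t_max * (1.0 - x), np.inf)

def nonleaky_if_fire_time(in_times, weights, threshold=1.0):
    """First spike time of a non-leaky integrate-and-fire neuron.

    The membrane potential integrates each weighted input from its spike
    time onward: V(t) = sum_i w_i * max(t - t_i, 0). The neuron fires when
    V(t) first reaches `threshold`. Event-driven solver: walk through input
    spikes in time order and solve the linear crossing in each interval.
    """
    order = np.argsort(in_times)
    t_sorted = in_times[order]
    w_sorted = weights[order]
    w_sum = 0.0   # sum of weights of inputs received so far
    wt_sum = 0.0  # sum of w_i * t_i of inputs received so far
    for k in range(len(t_sorted)):
        if not np.isfinite(t_sorted[k]):
            break  # remaining inputs never spike
        w_sum += w_sorted[k]
        wt_sum += w_sorted[k] * t_sorted[k]
        if w_sum <= 0:
            continue
        # V(t) = w_sum*t - wt_sum = threshold while k+1 inputs are active
        t_fire = (threshold + wt_sum) / w_sum
        t_next = t_sorted[k + 1] if k + 1 < len(t_sorted) else np.inf
        if t_sorted[k] <= t_fire <= t_next:
            return t_fire
    return np.inf  # potential never reaches threshold

# toy example: two strong early inputs drive the neuron over threshold
x = np.array([0.9, 0.8, 0.0, 0.1])  # normalized input features
t_in = encode_temporal(x)           # spike times: earliest for largest x
w = np.array([0.7, 0.6, 0.5, 0.4])
t_out = nonleaky_if_fire_time(t_in, w, threshold=1.0)

# sparsity in this coding: fraction of neurons that never spike
sparsity = np.mean(~np.isfinite(t_in))
```

In this coding scheme a neuron that never reaches threshold emits no spike at all, which is what makes the network-level sparsity (56.24% in the paper's hardware estimate) translate directly into energy savings.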
Pages: 76903-76912
Page count: 10
Cited References
33 records
[1]  
[Anonymous], arXiv:1904.07537
[2]  
[Anonymous], 2017, Frustum PointNets for 3D Object Detection from RGB-D Data
[3]  
[Anonymous], 2017, arXiv:1706.02413
[4]  
[Anonymous], 2018, arXiv:1803.06199
[5]   Lidar System Architectures and Circuits [J].
Behroozpour, Behnam ;
Sandborn, Phillip A. M. ;
Wu, Ming C. ;
Boser, Bernhard E. .
IEEE COMMUNICATIONS MAGAZINE, 2017, 55 (10) :135-142
[6]  
Biswas A, 2018, ISSCC DIG TECH PAP I, P488, DOI 10.1109/ISSCC.2018.8310397
[7]   Multi-View 3D Object Detection Network for Autonomous Driving [J].
Chen, Xiaozhi ;
Ma, Huimin ;
Wan, Ji ;
Li, Bo ;
Xia, Tian .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :6526-6534
[8]  
Comsa JM, 2020, INT CONF ACOUST SPEE, P8529, DOI [10.1109/ICASSP40776.2020.9053856, 10.1109/icassp40776.2020.9053856]
[9]   Shape Completion using 3D-Encoder-Predictor CNNs and Shape Synthesis [J].
Dai, Angela ;
Qi, Charles Ruizhongtai ;
Niessner, Matthias .
30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, :6545-6554
[10]  
Dayan P, 2001, Theoretical neuroscience