An End-to-End Broad Learning System for Event-Based Object Classification

Cited by: 24
Authors
Gao, Shan [1 ]
Guo, Guangqian [2 ]
Huang, Hanqiao [1 ]
Cheng, Xuemei [1 ]
Chen, C. L. Philip [1 ]
Affiliations
[1] Northwestern Polytech Univ, Unmanned Syst Res Inst, Xian 710072, Peoples R China
[2] Shaanxi Univ Sci & Technol, Sch Elect & Control Engn, Xian 710021, Peoples R China
Keywords
Cameras; Neural networks; Training; Learning systems; Kernel; Feature extraction; Convolution; Event camera; object classification; broad learning system; incremental learning; APPROXIMATION; FEATURES; DRIVEN;
DOI
10.1109/ACCESS.2020.2978109
Chinese Library Classification
TP [automation technology; computer technology];
Discipline code
0812 ;
Abstract
Event cameras are bio-inspired vision sensors that measure per-pixel brightness changes (each change is referred to as an 'event') asynchronously and independently, instead of capturing full brightness images at a fixed rate as conventional cameras do. The asynchronous event stream, mixed with noise, is challenging for event-based vision tasks. In this paper, we propose a broad learning network for object classification from event data. The network consists of two distinct layers, a feature-node layer and an enhancement-node layer. Unlike convolutional neural networks, the broad learning network can be extended by adding nodes to its layers during training. We design a gradient descent algorithm to train the network parameters, yielding an event-based broad learning network trained in an end-to-end manner. Our model outperforms state-of-the-art models while remaining small in scale and fast to train, which demonstrates the suitability of event cameras for online training and inference.
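The feature-node/enhancement-node structure described in the abstract can be sketched with the classical broad learning system formulation. Note this is a minimal illustrative sketch, not the paper's method: it solves the output weights with a ridge-regularized pseudoinverse (the standard BLS closed form) rather than the end-to-end gradient descent the paper proposes, and all node counts, activations, and the toy data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def bls_fit(X, Y, n_feature=40, n_enhance=60, reg=1e-3):
    """Minimal broad learning system: random feature nodes feed
    random enhancement nodes; output weights are solved in closed form."""
    Wf = rng.standard_normal((X.shape[1], n_feature))
    Z = np.tanh(X @ Wf)                     # feature-node layer
    We = rng.standard_normal((n_feature, n_enhance))
    H = np.tanh(Z @ We)                     # enhancement-node layer
    A = np.hstack([Z, H])                   # broad expansion of both layers
    # Ridge-regularized least squares for the output weights
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W

def bls_predict(X, Wf, We, W):
    Z = np.tanh(X @ Wf)
    H = np.tanh(Z @ We)
    return np.hstack([Z, H]) @ W

# Toy usage: two-class classification of well-separated Gaussian clusters
X = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(3, 1, (50, 8))])
Y = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])
Wf, We, W = bls_fit(X, Y)
acc = (bls_predict(X, Wf, We, W).argmax(1) == Y.argmax(1)).mean()
```

Because new feature or enhancement nodes only widen the matrix `A`, the network can be grown incrementally during training without retraining from scratch, which is the property the abstract contrasts with convolutional neural networks.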
Pages: 45974-45984 (11 pages)